Feb 01 14:20:57 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 01 14:20:57 crc restorecon[4766]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 01 14:20:57 crc restorecon[4766]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 01 14:20:57 crc restorecon[4766]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 01 14:20:57 crc restorecon[4766]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 01 14:20:57 crc restorecon[4766]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:57 crc restorecon[4766]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 01 14:20:57 crc restorecon[4766]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:57 crc restorecon[4766]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 14:20:57 crc restorecon[4766]: [records of the same form follow, one per file in the same directory-hash directory, each ending "not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16"; condensed here to the file names:]
    69105f4f.0, GlobalSign.1.pem, 0b9bc432.0, Certum_Trusted_Network_CA_2.pem, GTS_Root_R3.pem, 32888f65.0,
    CommScope_Public_Trust_ECC_Root-01.pem, 6b03dec0.0, 219d9499.0, CommScope_Public_Trust_ECC_Root-02.pem,
    5acf816d.0, cbf06781.0, CommScope_Public_Trust_RSA_Root-01.pem, GTS_Root_R4.pem, dc99f41e.0,
    CommScope_Public_Trust_RSA_Root-02.pem, GlobalSign.3.pem, AAA_Certificate_Services.pem, 985c1f52.0,
    8794b4e3.0, D-TRUST_BR_Root_CA_1_2020.pem, e7c037b4.0, ef954a4e.0, D-TRUST_EV_Root_CA_1_2020.pem,
    2add47b6.0, 90c5a3c8.0, D-TRUST_Root_Class_3_CA_2_2009.pem, b0f3e76e.0, 53a1b57a.0,
    D-TRUST_Root_Class_3_CA_2_EV_2009.pem, GlobalSign_Root_CA.pem, DigiCert_Assured_ID_Root_CA.pem,
    5ad8a5d6.0, 68dd7389.0, DigiCert_Assured_ID_Root_G2.pem, 9d04f354.0, 8d6437c3.0, 062cdee6.0,
    bd43e1dd.0, DigiCert_Assured_ID_Root_G3.pem, 7f3d5d1d.0, c491639e.0, GlobalSign_Root_E46.pem,
    DigiCert_Global_Root_CA.pem, 3513523f.0, 399e7759.0, feffd413.0, d18e9066.0, DigiCert_Global_Root_G2.pem,
    607986c7.0, c90bc37d.0, 1b0f7e5c.0, 1e08bfd1.0, DigiCert_Global_Root_G3.pem, dd8e9d41.0, ed39abd0.0,
    a3418fda.0, bc3f2570.0, DigiCert_High_Assurance_EV_Root_CA.pem, 244b5494.0, 81b9768f.0, GlobalSign.2.pem,
    4be590e0.0, DigiCert_TLS_ECC_P384_Root_G5.pem, 9846683b.0, 252252d2.0, 1e8e7201.0, ISRG_Root_X1.pem,
    DigiCert_TLS_RSA4096_Root_G5.pem, d52c538d.0, c44cc0c0.0, GlobalSign_Root_R46.pem,
    DigiCert_Trusted_Root_G4.pem, 75d1b2ed.0, a2c66da8.0, GTS_Root_R2.pem, ecccd8db.0
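Every record in this run has the same anatomy: restorecon computes the default label for a path, sees that the file's current type, container_file_t, is one the loaded policy treats as admin-customizable, and leaves the label alone, reporting "not reset as customized by admin to <target context>". On a targeted-policy host the customizable types are listed in /etc/selinux/targeted/contexts/customizable_types. The sketch below reproduces that decision with the Python standard library only; it illustrates the rule, it is not restorecon's actual code path, and the probed path is simply the first file from the records above:

    import os

    # First file from the records above (illustrative probe target).
    PATH = ("/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b"
            "/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem"
            "/directory-hash/48bec511.0")

    # Location under the targeted policy (assumption: may differ per distro/policy).
    CUSTOMIZABLE_TYPES = "/etc/selinux/targeted/contexts/customizable_types"

    def file_context(path: str) -> str:
        """Read a file's SELinux label from its security.selinux xattr."""
        raw = os.getxattr(path, "security.selinux")
        # e.g. system_u:object_r:container_file_t:s0:c10,c16 (NUL-terminated)
        return raw.rstrip(b"\x00").decode()

    def se_type(context: str) -> str:
        """Extract the type field from user:role:type:level."""
        return context.split(":")[2]

    def is_customizable(setype: str) -> bool:
        """True if restorecon (run without -F) would leave this type in place."""
        with open(CUSTOMIZABLE_TYPES) as f:
            return setype in {line.strip() for line in f if line.strip()}

    if __name__ == "__main__":
        ctx = file_context(PATH)
        if is_customizable(se_type(ctx)):
            # Mirrors the journal's "not reset as customized by admin" outcome.
            print(f"{PATH}: {se_type(ctx)} is customizable; restorecon would skip it")

Running restorecon -F ignores the customizable list and relabels anyway; the pass journaled here evidently ran without -F, which is why every one of these files kept its container label.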
Feb 01 14:20:57 crc restorecon[4766]: [records continue in the same form, same directory and target context, for:]
    Entrust.net_Certification_Authority__2048_.pem, aee5f10d.0, 3e7271e8.0, b0e59380.0, 4c3982f2.0,
    Entrust_Root_Certification_Authority.pem, 6b99d060.0, bf64f35b.0, 0a775a30.0, 002c0b4f.0, cc450945.0,
    Entrust_Root_Certification_Authority_-_EC1.pem, 106f3e4d.0, b3fb433b.0, GlobalSign.pem, 4042bcee.0,
    Entrust_Root_Certification_Authority_-_G2.pem, 02265526.0, 455f1b52.0, 0d69c7e1.0, 9f727ac7.0,
    Entrust_Root_Certification_Authority_-_G4.pem, 5e98733a.0, f0cd152c.0, dc4d6a89.0, 6187b673.0,
    FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem, ba8887ce.0, 068570d1.0, f081611a.0, 48a195d8.0,
    GDCA_TrustAUTH_R5_ROOT.pem, 0f6fa695.0, ab59055e.0, b92fd57f.0, GLOBALTRUST_2020.pem, fa5da96b.0,
    1ec40989.0, 7719f463.0, GTS_Root_R1.pem, 1001acf7.0, f013ecaf.0, 626dceaf.0, c559d742.0, 1d3472b9.0,
    9479c8c3.0, a81e292b.0, 4bfab552.0, Go_Daddy_Class_2_Certification_Authority.pem,
    Sectigo_Public_Server_Authentication_Root_E46.pem, Go_Daddy_Root_Certificate_Authority_-_G2.pem,
    e071171e.0, 57bcb2da.0, HARICA_TLS_ECC_Root_CA_2021.pem, ab5346f4.0, 5046c355.0,
    HARICA_TLS_RSA_Root_CA_2021.pem, 865fbdf9.0, da0cfd1d.0, 85cde254.0,
    Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem, cbb3f32b.0, SecureSign_RootCA11.pem,
    Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem, 5860aaa6.0
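The level field at the end of the target context, s0:c10,c16, is an SELinux multi-category security (MCS) pair. The container runtime assigns each pod its own pair, and everything the kubelet writes for that pod under /var/lib/kubelet/pods/<uid>/ carries it, which is what keeps one pod's processes from reading another pod's files. A minimal parser for the field; the first context string is copied from these records, the second is a hypothetical second pod added for contrast:

    def mcs_categories(context: str) -> set[str]:
        """Return the MCS category set from user:role:type:sensitivity:categories."""
        level = context.split(":", 3)[3]        # e.g. "s0:c10,c16"
        _sens, _, cats = level.partition(":")   # -> "s0", "c10,c16"
        return set(cats.split(",")) if cats else set()

    this_pod  = "system_u:object_r:container_file_t:s0:c10,c16"  # from the log
    other_pod = "system_u:object_r:container_file_t:s0:c4,c9"    # hypothetical

    assert mcs_categories(this_pod) == {"c10", "c16"}
    # Neither pod's category set contains the other's, so MCS denies
    # cross-pod file access in both directions.
    assert mcs_categories(this_pod).isdisjoint(mcs_categories(other_pod))

MCS grants access only when the process's categories dominate the file's; distinct pairs guarantee neither side dominates the other.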
Feb 01 14:20:57 crc restorecon[4766]: [records continue in the same form through 18856ac4.0; from 1e09d511.0 onward the timestamp advances to Feb 01 14:20:58:]
    31188b5e.0, HiPKI_Root_CA_-_G1.pem, c7f1359b.0, 5f15c80c.0, Hongkong_Post_Root_CA_3.pem, 09789157.0,
    ISRG_Root_X2.pem, 18856ac4.0, 1e09d511.0, IdenTrust_Commercial_Root_CA_1.pem, cf701eeb.0, d06393bb.0,
    IdenTrust_Public_Sector_Root_CA_1.pem, 10531352.0, Izenpe.com.pem, SecureTrust_CA.pem, b0ed035a.0,
    Microsec_e-Szigno_Root_CA_2009.pem, 8160b96c.0, e8651083.0, 2c63f966.0, Security_Communication_RootCA2.pem,
    Microsoft_ECC_Root_Certificate_Authority_2017.pem, 8d89cda1.0, 01419da9.0, SSL.com_TLS_RSA_Root_CA_2022.pem,
    b7a5b843.0, Microsoft_RSA_Root_Certificate_Authority_2017.pem, bf53fb88.0, 9591a472.0, 3afde786.0,
    SwissSign_Gold_CA_-_G2.pem, NAVER_Global_Root_Certification_Authority.pem, 3fb36b73.0, d39b0a2c.0,
    a89d74c2.0, cd58d51e.0, b7db1890.0, NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem, 988a38cb.0,
    60afe812.0, f39fc864.0, 5443e9e3.0, OISTE_WISeKey_Global_Root_GB_CA.pem, e73d606e.0, dfc0fe80.0,
    b66938e9.0, 1e1eab7c.0, OISTE_WISeKey_Global_Root_GC_CA.pem, 773e07ad.0, 3c899c73.0, d59297b8.0,
    ddcda989.0, QuoVadis_Root_CA_1_G3.pem, 749e9e03.0, 52b525c7.0, Security_Communication_RootCA3.pem,
    QuoVadis_Root_CA_2.pem, d7e8dc79.0, 7a819ef2.0, 08063a00.0, 6b483515.0, QuoVadis_Root_CA_2_G3.pem,
    064e0aa9.0, 1f58a078.0
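Throughout the run, friendly certificate names (QuoVadis_Root_CA_2.pem) interleave with eight-hex-digit names (7a819ef2.0). The latter is OpenSSL's lookup-hash layout: openssl rehash (formerly c_rehash) derives a <subject-hash>.0 entry, normally a symlink, for each certificate so a verifier can find an issuer by hashing its subject name. Each root therefore contributes two directory entries, which is why one extracted CA bundle generates this many journal records. A stdlib-only sketch that pairs the two forms, assuming the layout described (the directory is the one from the log; entries may be plain copies rather than symlinks, so the code tolerates both):

    import os
    import re

    HASH_NAME = re.compile(r"^[0-9a-f]{8}\.\d+$")  # e.g. 48bec511.0

    def hash_links(directory: str) -> dict[str, str]:
        """Map each OpenSSL subject-hash entry to the certificate it points at."""
        pairs = {}
        for name in sorted(os.listdir(directory)):
            if not HASH_NAME.match(name):
                continue
            path = os.path.join(directory, name)
            # In a ca-trust extracted tree these are symlinks to the .pem files;
            # fall back to the entry's own name if a copy was made instead.
            target = os.readlink(path) if os.path.islink(path) else name
            pairs[name] = target
        return pairs

    if __name__ == "__main__":
        d = ("/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b"
             "/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash")
        for link, cert in hash_links(d).items():
            print(f"{link} -> {cert}")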
Feb 01 14:20:58 crc restorecon[4766]: [records continue in the same form for:]
    6f7454b3.0, 7fa05551.0, QuoVadis_Root_CA_3.pem, 76faf6c0.0, 9339512a.0, f387163d.0, ee37c333.0,
    QuoVadis_Root_CA_3_G3.pem, e18bfb83.0, e442e424.0, fe8a2cd8.0, 23f4c490.0, 5cd81ad7.0,
    SSL.com_EV_Root_Certification_Authority_ECC.pem, f0c70a8d.0, 7892ad52.0, SZAFIR_ROOT_CA2.pem, 4f316efb.0,
    SSL.com_EV_Root_Certification_Authority_RSA_R2.pem, 06dc52d5.0, 583d0756.0,
    Sectigo_Public_Server_Authentication_Root_R46.pem, SSL.com_Root_Certification_Authority_ECC.pem,
    0bf05006.0, 88950faa.0, 9046744a.0, 3c860d51.0, SSL.com_Root_Certification_Authority_RSA.pem, 6fa5da56.0,
    33ee480d.0, Secure_Global_CA.pem, 63a2c897.0, SSL.com_TLS_ECC_Root_CA_2022.pem, bdacca6f.0, ff34af3f.0,
    dbff3a01.0, Security_Communication_ECC_RootCA1.pem, emSign_Root_CA_-_C1.pem,
    Starfield_Class_2_Certification_Authority.pem, 406c9bb1.0, Starfield_Root_Certificate_Authority_-_G2.pem,
    emSign_ECC_Root_CA_-_C3.pem, Starfield_Services_Root_Certificate_Authority_-_G2.pem,
    SwissSign_Silver_CA_-_G2.pem, 99e1b953.0, T-TeleSec_GlobalRoot_Class_2.pem, vTrus_Root_CA.pem,
    T-TeleSec_GlobalRoot_Class_3.pem, 14bc7599.0, TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem,
    TWCA_Global_Root_CA.pem, 7a3adc42.0, TWCA_Root_Certification_Authority.pem, f459871d.0,
    Telekom_Security_TLS_ECC_Root_2020.pem, emSign_Root_CA_-_G1.pem, Telekom_Security_TLS_RSA_Root_2023.pem,
    TeliaSonera_Root_CA_v1.pem, Telia_Root_CA_v2.pem, 8f103249.0, f058632f.0, ca-certificates.crt,
    TrustAsia_Global_Root_CA_G3.pem, 9bf03295.0
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 
14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 14:20:58 crc 
restorecon[4766]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 
14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 14:20:58 crc restorecon[4766]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 
14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc 
restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 14:20:58 crc restorecon[4766]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 01 14:20:58 crc restorecon[4766]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 01 14:20:58 crc restorecon[4766]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 01 14:20:58 crc kubenswrapper[4820]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 01 14:20:58 crc kubenswrapper[4820]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 01 14:20:58 crc kubenswrapper[4820]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 01 14:20:58 crc kubenswrapper[4820]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
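
The long restorecon[4766] run above is a file-context audit of /var/lib/kubelet during kubelet startup: entries already carrying a customizable SELinux type (here container_file_t with per-pod MCS category pairs such as s0:c7,c13) are reported as "not reset as customized by admin" and left alone, and only genuinely mislabeled paths, such as /var/usrlocal/bin/kubenswrapper just above, are actually relabeled. A minimal sketch for summarizing this output, assuming the journal has been saved to a hypothetical file named kubelet-journal.log; it scans the whole text rather than line by line, so it still works when several journal entries are wrapped onto one physical line as they are here:

```python
import re
from collections import Counter

# "not reset as customized by admin" = restorecon skipped the path because its
# current type is a customizable SELinux type; "Relabeled ... from ... to ..."
# = restorecon actually changed the context.
NOT_RESET = re.compile(r"restorecon\[\d+\]: (\S+) not reset as customized by admin to (\S+)")
RELABELED = re.compile(r"restorecon\[\d+\]: Relabeled (\S+) from (\S+) to (\S+)")

text = open("kubelet-journal.log", encoding="utf-8").read()  # hypothetical dump

# Tally skipped paths by the context they already carry.
skipped = Counter(m.group(2) for m in NOT_RESET.finditer(text))
for context, count in skipped.most_common():
    print(f"{count:6d} skipped (customizable type)  {context}")

# List the few entries restorecon really relabeled.
for m in RELABELED.finditer(text):
    path, old, new = m.groups()
    print(f"relabeled {path}: {old} -> {new}")
```

Run against this boot, the skip counter would be dominated by container_file_t contexts keyed by per-pod category pairs, which is expected: container_file_t is a customizable type, so restorecon leaves it in place unless forced with -F.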
Feb 01 14:20:58 crc kubenswrapper[4820]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 01 14:20:58 crc kubenswrapper[4820]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.927365 4820 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932673 4820 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932709 4820 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932720 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932729 4820 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932738 4820 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932748 4820 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932757 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932766 4820 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932774 4820 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932782 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932789 4820 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932797 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932806 4820 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932816 4820 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
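
The "Flag ... has been deprecated" warnings above are the kubelet pointing at command-line flags that should live in the file passed via --config instead. A hedged sketch of that migration, assuming the usual kubelet.config.k8s.io/v1beta1 KubeletConfiguration field names (containerRuntimeEndpoint, volumePluginDir, registerWithTaints, systemReserved, evictionHard); the values are illustrative placeholders, not read from this node:

```python
import json

kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    # --container-runtime-endpoint
    "containerRuntimeEndpoint": "unix:///var/run/crio/crio.sock",
    # --volume-plugin-dir
    "volumePluginDir": "/etc/kubernetes/kubelet-plugins/volume/exec",
    # --register-with-taints
    "registerWithTaints": [
        {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}
    ],
    # --system-reserved
    "systemReserved": {"cpu": "500m", "memory": "1Gi"},
    # --minimum-container-ttl-duration is superseded by eviction settings.
    "evictionHard": {"memory.available": "100Mi"},
    # --pod-infra-container-image has no config-file equivalent: per the
    # warning above, the image GC now gets the sandbox image from the CRI.
}

# The kubelet accepts its --config file as YAML or JSON; JSON keeps this
# sketch free of third-party dependencies.
print(json.dumps(kubelet_config, indent=2))
```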
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932827 4820 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932836 4820 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932846 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932854 4820 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932863 4820 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932877 4820 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932924 4820 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932933 4820 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932942 4820 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932952 4820 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932961 4820 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932970 4820 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932978 4820 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932988 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.932999 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933007 4820 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933015 4820 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933024 4820 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933062 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933073 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933081 4820 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933091 4820 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933101 4820 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933110 4820 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933120 4820 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933132 4820 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933141 4820 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933150 4820 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933158 4820 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933168 4820 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933182 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933191 4820 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933200 4820 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933208 4820 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933216 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933225 4820 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933234 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933242 4820 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933249 4820 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933258 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933266 4820 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933276 4820 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933285 4820 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933299 4820 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933308 4820 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933318 4820 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933325 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933336 4820 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
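The long run of feature_gate.go:330 warnings above is the kubelet rejecting OpenShift-specific gate names it does not know; only the feature_gate.go:351/353 entries change gates the kubelet itself recognizes. A sketch that tallies the unrecognized names from a saved excerpt (kubelet.log is again an assumed file name); the same set is re-logged on each configuration pass, so counts above 1 are expected:

```python
import re
from collections import Counter

# Minimal sketch: tally the "unrecognized feature gate" warnings in a saved
# journal excerpt, assuming one record per line.
gate = re.compile(r"unrecognized feature gate: (\w+)")

counts = Counter()
with open("kubelet.log") as f:
    for line in f:
        m = gate.search(line)
        if m:
            counts[m.group(1)] += 1

for name, n in counts.most_common():
    print(f"{name}: seen {n}x")
print(f"{len(counts)} distinct unrecognized gates")
```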
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933345 4820 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933353 4820 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933360 4820 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933368 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933375 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933385 4820 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933393 4820 feature_gate.go:330] unrecognized feature gate: Example
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933401 4820 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.933408 4820 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933575 4820 flags.go:64] FLAG: --address="0.0.0.0"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933592 4820 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933607 4820 flags.go:64] FLAG: --anonymous-auth="true"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933620 4820 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933635 4820 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933644 4820 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933657 4820 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933668 4820 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933679 4820 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933689 4820 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933699 4820 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933708 4820 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933717 4820 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933727 4820 flags.go:64] FLAG: --cgroup-root=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933736 4820 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933745 4820 flags.go:64] FLAG: --client-ca-file=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933754 4820 flags.go:64] FLAG: --cloud-config=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933764 4820 flags.go:64] FLAG: --cloud-provider=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933774 4820 flags.go:64] FLAG: --cluster-dns="[]"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933786 4820 flags.go:64] FLAG: --cluster-domain=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933796 4820 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933805 4820 flags.go:64] FLAG: --config-dir=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933814 4820 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933824 4820 flags.go:64] FLAG: --container-log-max-files="5"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933836 4820 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933846 4820 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933855 4820 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933864 4820 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933873 4820 flags.go:64] FLAG: --contention-profiling="false"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933938 4820 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933948 4820 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933957 4820 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933966 4820 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933979 4820 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933988 4820 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.933998 4820 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934007 4820 flags.go:64] FLAG: --enable-load-reader="false"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934017 4820 flags.go:64] FLAG: --enable-server="true"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934026 4820 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934037 4820 flags.go:64] FLAG: --event-burst="100"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934048 4820 flags.go:64] FLAG: --event-qps="50"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934058 4820 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934067 4820 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934076 4820 flags.go:64] FLAG: --eviction-hard=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934087 4820 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934096 4820 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934105 4820 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934114 4820 flags.go:64] FLAG: --eviction-soft=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934123 4820 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934133 4820 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934142 4820 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934152 4820 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934161 4820 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934170 4820 flags.go:64] FLAG: --fail-swap-on="true"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934179 4820 flags.go:64] FLAG: --feature-gates=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934190 4820 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934199 4820 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934208 4820 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934218 4820 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934227 4820 flags.go:64] FLAG: --healthz-port="10248"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934236 4820 flags.go:64] FLAG: --help="false"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934245 4820 flags.go:64] FLAG: --hostname-override=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934254 4820 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934263 4820 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934272 4820 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934281 4820 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934289 4820 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934298 4820 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934307 4820 flags.go:64] FLAG: --image-service-endpoint=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934316 4820 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934324 4820 flags.go:64] FLAG: --kube-api-burst="100"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934333 4820 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934342 4820 flags.go:64] FLAG: --kube-api-qps="50"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934351 4820 flags.go:64] FLAG: --kube-reserved=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934360 4820 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934368 4820 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934378 4820 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934388 4820 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934396 4820 flags.go:64] FLAG: --lock-file=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934405 4820 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934414 4820 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934425 4820 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934441 4820 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934451 4820 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934463 4820 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934473 4820 flags.go:64] FLAG: --logging-format="text"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934484 4820 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934496 4820 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934508 4820 flags.go:64] FLAG: --manifest-url=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934519 4820 flags.go:64] FLAG: --manifest-url-header=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934533 4820 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934544 4820 flags.go:64] FLAG: --max-open-files="1000000"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934559 4820 flags.go:64] FLAG: --max-pods="110"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934572 4820 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934582 4820 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934592 4820 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934602 4820 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934614 4820 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934626 4820 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934638 4820 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934667 4820 flags.go:64] FLAG: --node-status-max-images="50"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934679 4820 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934690 4820 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934702 4820 flags.go:64] FLAG: --pod-cidr=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934714 4820 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934731 4820 flags.go:64] FLAG: --pod-manifest-path=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934742 4820 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934754 4820 flags.go:64] FLAG: --pods-per-core="0"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934764 4820 flags.go:64] FLAG: --port="10250"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934774 4820 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934783 4820 flags.go:64] FLAG: --provider-id=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934792 4820 flags.go:64] FLAG: --qos-reserved=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934804 4820 flags.go:64] FLAG: --read-only-port="10255"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934823 4820 flags.go:64] FLAG: --register-node="true"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934834 4820 flags.go:64] FLAG: --register-schedulable="true"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934843 4820 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934862 4820 flags.go:64] FLAG: --registry-burst="10"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934881 4820 flags.go:64] FLAG: --registry-qps="5"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934922 4820 flags.go:64] FLAG: --reserved-cpus=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934933 4820 flags.go:64] FLAG: --reserved-memory=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934946 4820 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934956 4820 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934967 4820 flags.go:64] FLAG: --rotate-certificates="false"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934976 4820 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934985 4820 flags.go:64] FLAG: --runonce="false"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.934994 4820 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935004 4820 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935013 4820 flags.go:64] FLAG: --seccomp-default="false"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935022 4820 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935031 4820 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935041 4820 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935051 4820 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935060 4820 flags.go:64] FLAG: --storage-driver-password="root"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935069 4820 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935078 4820 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935087 4820 flags.go:64] FLAG: --storage-driver-user="root"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935096 4820 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935106 4820 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935115 4820 flags.go:64] FLAG: --system-cgroups=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935124 4820 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935139 4820 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935148 4820 flags.go:64] FLAG: --tls-cert-file=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935157 4820 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935172 4820 flags.go:64] FLAG: --tls-min-version=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935181 4820 flags.go:64] FLAG: --tls-private-key-file=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935192 4820 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935205 4820 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935214 4820 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935225 4820 flags.go:64] FLAG: --v="2"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935238 4820 flags.go:64] FLAG: --version="false"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935251 4820 flags.go:64] FLAG: --vmodule=""
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935262 4820 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.935272 4820 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935491 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935501 4820 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935511 4820 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935521 4820 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935530 4820 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935538 4820 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935547 4820 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935556 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935567 4820 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935576 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935584 4820 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935595 4820 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
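The flags.go:64 dump above records every flag's effective value, one FLAG: --name="value" entry each. A sketch that recovers them into a dictionary from a saved excerpt (kubelet.log remains an assumed file name); the values printed in the comments are the ones visible in the dump:

```python
import re

# Minimal sketch: parse the kubelet's flag dump back into a dict, assuming
# one journal record per line in "kubelet.log".
flag_re = re.compile(r'FLAG: (--[\w-]+)="(.*)"')

flags = {}
with open("kubelet.log") as f:
    for line in f:
        m = flag_re.search(line)
        if m:
            flags[m.group(1)] = m.group(2)

print(flags.get("--config"))           # /etc/kubernetes/kubelet.conf
print(flags.get("--node-ip"))          # 192.168.126.11
print(flags.get("--system-reserved"))  # cpu=200m,ephemeral-storage=350Mi,memory=350Mi
```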
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935605 4820 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935615 4820 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935623 4820 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935631 4820 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935639 4820 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935647 4820 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935655 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935662 4820 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935670 4820 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935678 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935685 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935693 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935702 4820 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935711 4820 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935719 4820 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935726 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935734 4820 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935742 4820 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935750 4820 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935759 4820 feature_gate.go:330] unrecognized feature gate: Example
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935770 4820 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935780 4820 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935790 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935798 4820 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935808 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935815 4820 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935823 4820 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935831 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935841 4820 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935851 4820 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935860 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935869 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935912 4820 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935921 4820 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935929 4820 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935939 4820 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935949 4820 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935960 4820 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935970 4820 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935980 4820 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.935990 4820 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.936000 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.936011 4820 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.936021 4820 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.936031 4820 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.936042 4820 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.936052 4820 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.936061 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.936069 4820 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.936077 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.936086 4820 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.936095 4820 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.936105 4820 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.936115 4820 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.936125 4820 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.936138 4820 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.936149 4820 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.936160 4820 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.936170 4820 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.937141 4820 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.954541 4820 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.954652 4820 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.954832 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.954856 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.954868 4820 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.954921 4820 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.954935 4820 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.954946 4820 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.954958 4820 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.954969 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.954982 4820 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.954992 4820 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955003 4820 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955013 4820 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955023 4820 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955057 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955068 4820 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955079 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955090 4820 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955099 4820 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955108 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955117 4820 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955127 4820 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955137 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955146 4820 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955155 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955168 4820 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955181 4820 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955191 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955203 4820 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955213 4820 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955222 4820 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955232 4820 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955241 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955253 4820 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955265 4820 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955275 4820 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955285 4820 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955296 4820 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955307 4820 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955318 4820 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955330 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955341 4820 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955352 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955363 4820 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955373 4820 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955383 4820 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955394 4820 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955404 4820 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955415 4820 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955424 4820 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955434 4820 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955445 4820 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955494 4820 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955509 4820 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955521 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955535 4820 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955549 4820 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955560 4820 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955571 4820 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955582 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955598 4820 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955619 4820 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955633 4820 feature_gate.go:330] unrecognized feature gate: Example
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955645 4820 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955655 4820 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955666 4820 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955679 4820 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955690 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955703 4820 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955716 4820 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955727 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.955737 4820 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.955755 4820 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956124 4820 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956152 4820 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956163 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956173 4820 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956186 4820 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
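Each configuration pass ends in a feature_gate.go:386 summary whose Go map literal holds the gates the kubelet actually applied. A sketch that parses that summary line back into a Python dict (kubelet.log is still an assumed file name); the Name:true/false pair format is taken from the entry above:

```python
import re

# Minimal sketch: parse the "feature gates: {map[...]}" summary line,
# assuming one journal record per line in "kubelet.log".
line_re = re.compile(r"feature gates: \{map\[(.*)\]\}")
pair_re = re.compile(r"(\w+):(true|false)")

with open("kubelet.log") as f:
    for line in f:
        m = line_re.search(line)
        if m:
            gates = {k: v == "true" for k, v in pair_re.findall(m.group(1))}
            print(gates["ValidatingAdmissionPolicy"])  # True, per the entry above
            break
```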
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956202 4820 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956214 4820 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956225 4820 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956236 4820 feature_gate.go:330] unrecognized feature gate: Example
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956246 4820 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956256 4820 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956266 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956276 4820 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956286 4820 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956296 4820 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956306 4820 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956319 4820 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956329 4820 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956339 4820 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956351 4820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956363 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956374 4820 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956385 4820 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956396 4820 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956407 4820 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956416 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956427 4820 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956438 4820 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956449 4820 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956460 4820 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956470 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956480 4820 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956492 4820 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956502 4820 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956512 4820 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956523 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956533 4820 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956543 4820 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956553 4820 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956566 4820 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956579 4820 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956590 4820 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956601 4820 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956613 4820 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956623 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956634 4820 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956645 4820 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956655 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956666 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956677 4820 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956687 4820 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956697 4820 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956710 4820 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956720 4820 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956733 4820 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956746 4820 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956759 4820 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956773 4820 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956785 4820 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956796 4820 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956806 4820 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956817 4820 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956827 4820 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956838 4820 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956849 4820 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956859 4820 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956870 4820 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956924 4820 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956935 4820 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956946 4820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 01 14:20:58 crc kubenswrapper[4820]: W0201 14:20:58.956957 4820 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.956974 4820 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.958712 4820 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.965853 4820 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.966051 4820 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
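The certificate_store.go entry above names the on-disk client certificate the kubelet loaded. A sketch that reads the same PEM and prints its expiry, mirroring the "Certificate expiration is ..." entry that follows; it assumes the third-party cryptography package is installed, read access to the file, and that the certificate (not the key) is the first PEM block:

```python
from cryptography import x509

# Minimal sketch: inspect the kubelet client certificate referenced in the
# log. Path taken from the certificate_store.go entry above; parsing the
# first PEM block as a certificate is an assumption about the file layout.
PEM = "/var/lib/kubelet/pki/kubelet-client-current.pem"

with open(PEM, "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("not valid after:", cert.not_valid_after)
print("subject:", cert.subject.rfc4514_string())
```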
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.968016 4820 server.go:997] "Starting client certificate rotation"
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.968068 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.969102 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-21 04:01:19.56046258 +0000 UTC
Feb 01 14:20:58 crc kubenswrapper[4820]: I0201 14:20:58.969218 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.001173 4820 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 01 14:20:59 crc kubenswrapper[4820]: E0201 14:20:59.004000 4820 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.004767 4820 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.023143 4820 log.go:25] "Validated CRI v1 runtime API"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.070116 4820 log.go:25] "Validated CRI v1 image API"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.072293 4820 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.081023 4820 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-01-14-16-12-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.081081 4820 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.095610 4820 manager.go:217] Machine: {Timestamp:2026-02-01 14:20:59.092997654 +0000 UTC m=+0.613363948 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:e802f971-4889-4bb7-b640-2de29e2c4a97 BootID:b8e7bd4f-d2dd-4ff5-b41a-2605330b4088 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:b6:ce:77 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:b6:ce:77 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:6b:15:b7 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:51:ff:be Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:b2:9a:ff Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:19:9e:68 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:df:b9:bc Speed:-1 Mtu:1496} {Name:eth10 MacAddress:fa:c0:85:c8:54:a5 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ca:c5:6d:dd:28:fa Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.095846 4820 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.096022 4820 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.096325 4820 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.096497 4820 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.096546 4820 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.096786 4820 topology_manager.go:138] "Creating topology manager with none policy"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.096799 4820 container_manager_linux.go:303] "Creating device plugin manager"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.098474 4820 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.098511 4820 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.099496 4820 state_mem.go:36] "Initialized new in-memory state store"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.099595 4820 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.104058 4820 kubelet.go:418] "Attempting to sync node with API server"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.104085 4820 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.104122 4820 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.104135 4820 kubelet.go:324] "Adding apiserver pod source"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.104152 4820 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 01 14:20:59 crc kubenswrapper[4820]: W0201 14:20:59.109808 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused
Feb 01 14:20:59 crc kubenswrapper[4820]: W0201 14:20:59.109862 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused
Feb 01 14:20:59 crc kubenswrapper[4820]: E0201 14:20:59.109926 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError"
Feb 01 14:20:59 crc kubenswrapper[4820]: E0201 14:20:59.109992 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.110952 4820 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.112366 4820 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.115365 4820 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.117152 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.117197 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.117214 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.117228 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.117306 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.117323 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.117337 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.117360 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.117377 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.117392 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.117443 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.117458 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.118881 4820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.119793 4820 server.go:1280] "Started kubelet"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.120239 4820 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.120478 4820 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.121021 4820 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.121222 4820 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 01 14:20:59 crc systemd[1]: Started Kubernetes Kubelet.
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.123874 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.123974 4820 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.128989 4820 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.129028 4820 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 01 14:20:59 crc kubenswrapper[4820]: E0201 14:20:59.129111 4820 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.129965 4820 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 01 14:20:59 crc kubenswrapper[4820]: E0201 14:20:59.130546 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="200ms"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.128822 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 11:33:09.153356904 +0000 UTC
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.132201 4820 factory.go:55] Registering systemd factory
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.132250 4820 factory.go:221] Registration of the systemd container factory successfully
Feb 01 14:20:59 crc kubenswrapper[4820]: W0201 14:20:59.132284 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused
Feb 01 14:20:59 crc kubenswrapper[4820]: E0201 14:20:59.132478 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError"
Feb 01 14:20:59 crc kubenswrapper[4820]: E0201 14:20:59.131574 4820 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.73:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1890254b5a056722 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-01 14:20:59.119683362 +0000 UTC m=+0.640049686,LastTimestamp:2026-02-01 14:20:59.119683362 +0000 UTC m=+0.640049686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.133530 4820 factory.go:153] Registering CRI-O factory
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.133643 4820 factory.go:221] Registration of the crio container factory successfully
Feb 01 14:20:59 crc
kubenswrapper[4820]: I0201 14:20:59.134033 4820 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.134099 4820 factory.go:103] Registering Raw factory Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.135565 4820 manager.go:1196] Started watching for new ooms in manager Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.136661 4820 server.go:460] "Adding debug handlers to kubelet server" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.137527 4820 manager.go:319] Starting recovery of all containers Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146035 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146124 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146143 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146160 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146176 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146192 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146208 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146224 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146243 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146258 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146273 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146289 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146306 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146325 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146478 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146510 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146528 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146545 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146562 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146579 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146595 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146612 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146626 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146641 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146656 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146709 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146731 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146775 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146796 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146814 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146831 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146851 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146868 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146911 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146931 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146948 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146967 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.146986 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.147004 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.147022 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.147039 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.147057 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.147075 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.147090 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.147106 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.147124 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.147144 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.147164 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.147217 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.147235 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.147252 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.147268 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.147291 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.147313 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.147331 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.147348 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.147365 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.147379 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.147395 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150288 4820 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150340 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150362 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150380 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150399 4820 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150419 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150435 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150453 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150471 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150488 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150506 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150523 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150542 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150559 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150577 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150598 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150620 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150637 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150654 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150694 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150716 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150735 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150753 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150772 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150791 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150809 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150826 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150845 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150881 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.150991 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151013 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151029 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151046 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151063 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151079 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151102 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151123 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151139 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151155 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151172 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151193 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151210 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151226 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151242 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151260 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151280 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151350 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151371 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151437 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151459 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151477 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151495 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151517 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151535 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151551 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151572 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151633 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151652 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151669 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151686 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151704 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151720 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151736 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151804 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151823 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151840 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151858 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151946 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151971 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.151988 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152006 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152021 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152038 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152055 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152074 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152088 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152105 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152123 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152139 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152156 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152171 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152188 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152205 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152223 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152241 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152258 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152274 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152289 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152308 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152325 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152389 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152409 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152431 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152447 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152463 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152481 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152498 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152517 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152534 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152555 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152572 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152590 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152606 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152623 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152641 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152660 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152677 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152694 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152710 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152726 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152741 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152756 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152772 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152787 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152803 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152820 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152839 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152856 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152879 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152921 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152938 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152956 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152972 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.152990 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153005 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153024 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153042 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153060 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153077 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153093 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153112 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153129 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153148 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153170 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153188 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153204 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153222 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153238 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153255 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153273 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153290 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153306 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153325 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153342 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153359 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153377 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153393 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153409 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153428 4820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153444 4820 reconstruct.go:97] "Volume reconstruction finished" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.153456 4820 reconciler.go:26] "Reconciler: start to sync state" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.169022 4820 manager.go:324] Recovery completed Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.190195 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.192661 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.192726 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.192747 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.193547 4820 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.194049 4820 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.194174 4820 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.194261 4820 state_mem.go:36] "Initialized new in-memory state store" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.197409 4820 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.197473 4820 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.197510 4820 kubelet.go:2335] "Starting kubelet main sync loop" Feb 01 14:20:59 crc kubenswrapper[4820]: E0201 14:20:59.197616 4820 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 01 14:20:59 crc kubenswrapper[4820]: W0201 14:20:59.198218 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 01 14:20:59 crc kubenswrapper[4820]: E0201 14:20:59.198288 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.211303 4820 policy_none.go:49] "None policy: Start" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.218442 4820 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.218495 4820 state_mem.go:35] "Initializing new in-memory state store" Feb 01 14:20:59 crc kubenswrapper[4820]: E0201 14:20:59.229525 4820 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.282925 4820 manager.go:334] "Starting Device Plugin manager" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.283005 4820 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.283028 4820 server.go:79] "Starting device plugin registration server" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.283483 4820 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.283508 4820 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.283755 4820 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.283862 4820 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.283879 4820 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 01 14:20:59 crc kubenswrapper[4820]: E0201 14:20:59.294449 4820 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.298686 4820 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 01 14:20:59 crc kubenswrapper[4820]: 
I0201 14:20:59.298769 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.299684 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.299713 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.299724 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.299893 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.300091 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.300172 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.300600 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.300644 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.300728 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.300859 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.301032 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.301078 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.301399 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.301420 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.301428 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.302787 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.302813 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.302821 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.303070 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.303118 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.303130 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.303305 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.303532 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.303598 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.304046 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.304081 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.304092 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.304186 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.304284 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.304308 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.304547 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.304565 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.304591 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.305174 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.305198 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.305206 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.305642 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.305685 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.305700 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.306002 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.306049 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.307011 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.307037 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.307047 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 14:20:59 crc kubenswrapper[4820]: E0201 14:20:59.331148 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="400ms"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.356567 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.356624 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.356685 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.356712 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.356813 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.356974 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.357008 4820 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.357036 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.357061 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.357131 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.357177 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.357209 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.357290 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.357346 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.357380 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.384505 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 
14:20:59.389206 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.390057 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.390077 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.390125 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 01 14:20:59 crc kubenswrapper[4820]: E0201 14:20:59.390912 4820 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.73:6443: connect: connection refused" node="crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.458504 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.458798 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.458688 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.458870 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.459029 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.459097 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.458992 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.458974 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.459193 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.459250 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.459268 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.459283 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.459299 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.459317 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.459334 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.459349 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.459364 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.459380 
4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.459564 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.459629 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.459611 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.459141 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.459594 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.459866 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.459921 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.459910 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.459916 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 
14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.459939 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.459894 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.459955 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.591112 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.592801 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.592962 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.593057 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.593158 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 01 14:20:59 crc kubenswrapper[4820]: E0201 14:20:59.593763 4820 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.73:6443: connect: connection refused" node="crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.631699 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.637337 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.680155 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: W0201 14:20:59.682826 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-ca635b4bee73e6bcfbc579b89702579d4db8c3bd4b6b803d8cc368ff6cd65225 WatchSource:0}: Error finding container ca635b4bee73e6bcfbc579b89702579d4db8c3bd4b6b803d8cc368ff6cd65225: Status 404 returned error can't find the container with id ca635b4bee73e6bcfbc579b89702579d4db8c3bd4b6b803d8cc368ff6cd65225 Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.691852 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.693185 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 01 14:20:59 crc kubenswrapper[4820]: W0201 14:20:59.700147 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-589dd4e078a70be8337f4876539f921419e314b280595faa1be6b2ada3dcbf45 WatchSource:0}: Error finding container 589dd4e078a70be8337f4876539f921419e314b280595faa1be6b2ada3dcbf45: Status 404 returned error can't find the container with id 589dd4e078a70be8337f4876539f921419e314b280595faa1be6b2ada3dcbf45 Feb 01 14:20:59 crc kubenswrapper[4820]: W0201 14:20:59.711826 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-de9736fac06b024bc845ae4ca492efdeb52e4c9f1d353e6bddfb0f3824e398b4 WatchSource:0}: Error finding container de9736fac06b024bc845ae4ca492efdeb52e4c9f1d353e6bddfb0f3824e398b4: Status 404 returned error can't find the container with id de9736fac06b024bc845ae4ca492efdeb52e4c9f1d353e6bddfb0f3824e398b4 Feb 01 14:20:59 crc kubenswrapper[4820]: W0201 14:20:59.713432 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-0e3583dda21acfb7700c11cda560ca00e490f2a932b4c977417bdf0361d2227c WatchSource:0}: Error finding container 0e3583dda21acfb7700c11cda560ca00e490f2a932b4c977417bdf0361d2227c: Status 404 returned error can't find the container with id 0e3583dda21acfb7700c11cda560ca00e490f2a932b4c977417bdf0361d2227c Feb 01 14:20:59 crc kubenswrapper[4820]: E0201 14:20:59.732320 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="800ms" Feb 01 14:20:59 crc kubenswrapper[4820]: W0201 14:20:59.983864 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 01 14:20:59 crc kubenswrapper[4820]: E0201 14:20:59.984083 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.994648 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.997448 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.997547 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.997563 4820 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:20:59 crc kubenswrapper[4820]: I0201 14:20:59.997619 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 01 14:20:59 crc kubenswrapper[4820]: E0201 14:20:59.998565 4820 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.73:6443: connect: connection refused" node="crc" Feb 01 14:21:00 crc kubenswrapper[4820]: I0201 14:21:00.121653 4820 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 01 14:21:00 crc kubenswrapper[4820]: W0201 14:21:00.127265 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 01 14:21:00 crc kubenswrapper[4820]: E0201 14:21:00.127388 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 01 14:21:00 crc kubenswrapper[4820]: I0201 14:21:00.131951 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 12:43:57.795101942 +0000 UTC Feb 01 14:21:00 crc kubenswrapper[4820]: I0201 14:21:00.204074 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"731ae59ee1e69f090066aea245914d8e21ba2856bbe195864b4de718a40a67ae"} Feb 01 14:21:00 crc kubenswrapper[4820]: I0201 14:21:00.205288 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ca635b4bee73e6bcfbc579b89702579d4db8c3bd4b6b803d8cc368ff6cd65225"} Feb 01 14:21:00 crc kubenswrapper[4820]: I0201 14:21:00.206438 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0e3583dda21acfb7700c11cda560ca00e490f2a932b4c977417bdf0361d2227c"} Feb 01 14:21:00 crc kubenswrapper[4820]: I0201 14:21:00.208087 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"de9736fac06b024bc845ae4ca492efdeb52e4c9f1d353e6bddfb0f3824e398b4"} Feb 01 14:21:00 crc kubenswrapper[4820]: I0201 14:21:00.209484 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"589dd4e078a70be8337f4876539f921419e314b280595faa1be6b2ada3dcbf45"} Feb 01 14:21:00 crc kubenswrapper[4820]: W0201 14:21:00.398592 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 01 14:21:00 crc kubenswrapper[4820]: E0201 14:21:00.399149 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 01 14:21:00 crc kubenswrapper[4820]: E0201 14:21:00.533599 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="1.6s" Feb 01 14:21:00 crc kubenswrapper[4820]: W0201 14:21:00.541313 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 01 14:21:00 crc kubenswrapper[4820]: E0201 14:21:00.541409 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 01 14:21:00 crc kubenswrapper[4820]: E0201 14:21:00.707018 4820 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.73:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1890254b5a056722 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-01 14:20:59.119683362 +0000 UTC m=+0.640049686,LastTimestamp:2026-02-01 14:20:59.119683362 +0000 UTC m=+0.640049686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 01 14:21:00 crc kubenswrapper[4820]: I0201 14:21:00.799039 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:00 crc kubenswrapper[4820]: I0201 14:21:00.800738 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:00 crc kubenswrapper[4820]: I0201 14:21:00.800788 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:00 crc kubenswrapper[4820]: I0201 14:21:00.800801 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:00 crc kubenswrapper[4820]: I0201 14:21:00.800835 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 01 14:21:00 crc kubenswrapper[4820]: E0201 14:21:00.801426 4820 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.73:6443: 
connect: connection refused" node="crc" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.121895 4820 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.132269 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 15:13:09.122186359 +0000 UTC Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.152567 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 01 14:21:01 crc kubenswrapper[4820]: E0201 14:21:01.153768 4820 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.214592 4820 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="c64fbff194848e43b310b53537ffe0a2a2046761d8ac283714252f3117aa03a4" exitCode=0 Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.214684 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"c64fbff194848e43b310b53537ffe0a2a2046761d8ac283714252f3117aa03a4"} Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.214774 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.216175 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.216212 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.216223 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.219070 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.219694 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7"} Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.219821 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a"} Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.219858 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8"} Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.219925 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f"} Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.220466 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.220492 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.220500 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.222434 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c" exitCode=0 Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.222483 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c"} Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.222570 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.223436 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.223467 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.223478 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.223777 4820 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e8eb52f4a00a01d1201a3233d74bf0007a46979f5c5a5c7177d368fe5f77eb37" exitCode=0 Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.223834 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.223851 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e8eb52f4a00a01d1201a3233d74bf0007a46979f5c5a5c7177d368fe5f77eb37"} Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.224908 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.224946 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.224960 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.225139 4820 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.225603 4820 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="c2b45242751c6814ff86454a22b2269910501dc8e99910a0d42a43333206e408" exitCode=0 Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.225636 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"c2b45242751c6814ff86454a22b2269910501dc8e99910a0d42a43333206e408"} Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.225669 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.226605 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.226629 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.226639 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.226636 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.226722 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.226733 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:01 crc kubenswrapper[4820]: I0201 14:21:01.926840 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 14:21:01 crc kubenswrapper[4820]: W0201 14:21:01.990805 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 01 14:21:01 crc kubenswrapper[4820]: E0201 14:21:01.990907 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.121520 4820 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.133423 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 22:52:02.784611632 +0000 UTC Feb 01 14:21:02 crc kubenswrapper[4820]: E0201 14:21:02.135172 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="3.2s" Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.231619 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ca8810f9d798db466017c050c1aee60e6c4dfb31028044e738b334806ba6c538"} Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.231698 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"aa8913f9bae7df497b0a4124aa907e230e99e11f2b46b0b765adfba3cc7d9aaf"} Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.231722 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e02daf42c16519959234d13012c85757024b28979b4bbc46ebef19fc7c57c6ba"} Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.231709 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.234370 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.234422 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.234437 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.235909 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12"} Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.235950 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51"} Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.235961 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d"} Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.239752 4820 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1c3b7c032364962c991180de3f2f83db0f5a4b7735ba6f1012c01d451774ad0e" exitCode=0 Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.239864 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1c3b7c032364962c991180de3f2f83db0f5a4b7735ba6f1012c01d451774ad0e"} Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.239951 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 
14:21:02.240993 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.241029 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.241046 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.248177 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f676e6b13a32237c36128907682976a34657cec50f75a143a51a82ee71b1dcd0"} Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.248207 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.248228 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.251188 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.251230 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.251244 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.251507 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.251546 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.251558 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:02 crc kubenswrapper[4820]: W0201 14:21:02.313342 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 01 14:21:02 crc kubenswrapper[4820]: E0201 14:21:02.313466 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.402581 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.403859 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.403943 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.403958 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 01 14:21:02 crc kubenswrapper[4820]: I0201 14:21:02.403995 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 01 14:21:02 crc kubenswrapper[4820]: E0201 14:21:02.404803 4820 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.73:6443: connect: connection refused" node="crc" Feb 01 14:21:02 crc kubenswrapper[4820]: W0201 14:21:02.672942 4820 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 01 14:21:02 crc kubenswrapper[4820]: E0201 14:21:02.673045 4820 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.133815 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 11:12:51.925895146 +0000 UTC Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.253260 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d"} Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.253303 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d"} Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.253412 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.254632 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.254663 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.254673 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.256013 4820 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7e0cab6cc8e0a21837eadc200f79a27f0d5a832a9476d96babf08069faa2b7f0" exitCode=0 Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.256099 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.256149 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.256172 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.256174 4820 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7e0cab6cc8e0a21837eadc200f79a27f0d5a832a9476d96babf08069faa2b7f0"} Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.256222 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.256278 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.257054 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.257077 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.257087 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.257113 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.257132 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.257142 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.258047 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.258064 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.258068 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.258074 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.258082 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.258091 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.515941 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.522330 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 14:21:03 crc kubenswrapper[4820]: I0201 14:21:03.820005 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.134181 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 21:32:25.952963146 +0000 UTC Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.262537 4820 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0cccd82e3c896432eadd8aa4e3682175b4718bcc25833ebcfc5290f6d253b6fc"} Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.262580 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"63f771cd0c8c06eddf1c69c24299aabf7705bca5971e50e6d710ec70c67cc9dc"} Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.262590 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1c1de1b60eca623b40c4adf3f2577dc4f5e0b046899016d557b7601eff47c2d7"} Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.262598 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a2cce4d029599b5c9a47abf8f2db0af9adf904c6fd69bd899b6c324603dae407"} Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.262607 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"131f417c4046669c093d889bf48b10b8708fa0e3db7ba8624168ade0273a1098"} Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.262641 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.262944 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.263109 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.263152 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.264279 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.264310 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.264330 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.264387 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.264402 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.264410 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.265966 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.267254 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.267294 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.267307 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.268060 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.268091 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.268104 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.798072 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.928002 4820 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 01 14:21:04 crc kubenswrapper[4820]: I0201 14:21:04.928118 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 01 14:21:05 crc kubenswrapper[4820]: I0201 14:21:05.134571 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 11:09:13.076370588 +0000 UTC Feb 01 14:21:05 crc kubenswrapper[4820]: I0201 14:21:05.155716 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 01 14:21:05 crc kubenswrapper[4820]: I0201 14:21:05.264145 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:05 crc kubenswrapper[4820]: I0201 14:21:05.264251 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:05 crc kubenswrapper[4820]: I0201 14:21:05.264322 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 01 14:21:05 crc kubenswrapper[4820]: I0201 14:21:05.264404 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:05 crc kubenswrapper[4820]: I0201 14:21:05.265731 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:05 crc kubenswrapper[4820]: I0201 14:21:05.265777 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:05 crc kubenswrapper[4820]: I0201 14:21:05.265788 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:05 crc kubenswrapper[4820]: I0201 14:21:05.265988 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:05 crc kubenswrapper[4820]: I0201 14:21:05.266051 4820 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:05 crc kubenswrapper[4820]: I0201 14:21:05.266076 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:05 crc kubenswrapper[4820]: I0201 14:21:05.266013 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:05 crc kubenswrapper[4820]: I0201 14:21:05.266132 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:05 crc kubenswrapper[4820]: I0201 14:21:05.266147 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:05 crc kubenswrapper[4820]: I0201 14:21:05.605512 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:05 crc kubenswrapper[4820]: I0201 14:21:05.607003 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:05 crc kubenswrapper[4820]: I0201 14:21:05.607096 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:05 crc kubenswrapper[4820]: I0201 14:21:05.607117 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:05 crc kubenswrapper[4820]: I0201 14:21:05.607156 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 01 14:21:05 crc kubenswrapper[4820]: I0201 14:21:05.807500 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:21:06 crc kubenswrapper[4820]: I0201 14:21:06.135714 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 20:24:44.146106839 +0000 UTC Feb 01 14:21:06 crc kubenswrapper[4820]: I0201 14:21:06.266509 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:06 crc kubenswrapper[4820]: I0201 14:21:06.267926 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:06 crc kubenswrapper[4820]: I0201 14:21:06.267965 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:06 crc kubenswrapper[4820]: I0201 14:21:06.267975 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:06 crc kubenswrapper[4820]: I0201 14:21:06.458279 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 14:21:06 crc kubenswrapper[4820]: I0201 14:21:06.458419 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 01 14:21:06 crc kubenswrapper[4820]: I0201 14:21:06.458455 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:06 crc kubenswrapper[4820]: I0201 14:21:06.459579 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:06 crc kubenswrapper[4820]: I0201 14:21:06.459618 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:06 crc 
kubenswrapper[4820]: I0201 14:21:06.459628 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:07 crc kubenswrapper[4820]: I0201 14:21:07.023167 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 01 14:21:07 crc kubenswrapper[4820]: I0201 14:21:07.023343 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:07 crc kubenswrapper[4820]: I0201 14:21:07.024458 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:07 crc kubenswrapper[4820]: I0201 14:21:07.024487 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:07 crc kubenswrapper[4820]: I0201 14:21:07.024495 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:07 crc kubenswrapper[4820]: I0201 14:21:07.136369 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 18:38:20.266885012 +0000 UTC Feb 01 14:21:07 crc kubenswrapper[4820]: I0201 14:21:07.269029 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:07 crc kubenswrapper[4820]: I0201 14:21:07.270301 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:07 crc kubenswrapper[4820]: I0201 14:21:07.270335 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:07 crc kubenswrapper[4820]: I0201 14:21:07.270346 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:08 crc kubenswrapper[4820]: I0201 14:21:08.078526 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 14:21:08 crc kubenswrapper[4820]: I0201 14:21:08.078681 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:08 crc kubenswrapper[4820]: I0201 14:21:08.080229 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:08 crc kubenswrapper[4820]: I0201 14:21:08.080297 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:08 crc kubenswrapper[4820]: I0201 14:21:08.080320 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:08 crc kubenswrapper[4820]: I0201 14:21:08.136914 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 18:16:22.936156686 +0000 UTC Feb 01 14:21:09 crc kubenswrapper[4820]: I0201 14:21:09.137988 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 22:55:47.823367599 +0000 UTC Feb 01 14:21:09 crc kubenswrapper[4820]: E0201 14:21:09.295563 4820 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 01 14:21:10 crc kubenswrapper[4820]: I0201 14:21:10.138224 4820 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 20:18:30.285889597 +0000 UTC Feb 01 14:21:11 crc kubenswrapper[4820]: I0201 14:21:11.138732 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 19:06:27.406648792 +0000 UTC Feb 01 14:21:12 crc kubenswrapper[4820]: I0201 14:21:12.139489 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 20:12:33.402806526 +0000 UTC Feb 01 14:21:13 crc kubenswrapper[4820]: I0201 14:21:13.122262 4820 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 01 14:21:13 crc kubenswrapper[4820]: I0201 14:21:13.139813 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 21:51:33.495479485 +0000 UTC Feb 01 14:21:13 crc kubenswrapper[4820]: I0201 14:21:13.374103 4820 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 01 14:21:13 crc kubenswrapper[4820]: I0201 14:21:13.374251 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 01 14:21:13 crc kubenswrapper[4820]: I0201 14:21:13.381386 4820 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 01 14:21:13 crc kubenswrapper[4820]: I0201 14:21:13.381481 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 01 14:21:13 crc kubenswrapper[4820]: I0201 14:21:13.750556 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 01 14:21:13 crc kubenswrapper[4820]: I0201 14:21:13.750740 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:13 crc kubenswrapper[4820]: I0201 14:21:13.752181 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:13 crc kubenswrapper[4820]: I0201 14:21:13.752236 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:13 crc kubenswrapper[4820]: I0201 14:21:13.752251 4820 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 01 14:21:13 crc kubenswrapper[4820]: I0201 14:21:13.788443 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 01 14:21:13 crc kubenswrapper[4820]: I0201 14:21:13.825202 4820 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]log ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]etcd ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/generic-apiserver-start-informers ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/priority-and-fairness-filter ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/start-apiextensions-informers ok Feb 01 14:21:13 crc kubenswrapper[4820]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Feb 01 14:21:13 crc kubenswrapper[4820]: [-]poststarthook/crd-informer-synced failed: reason withheld Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/start-system-namespaces-controller ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 01 14:21:13 crc kubenswrapper[4820]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 01 14:21:13 crc kubenswrapper[4820]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/bootstrap-controller ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/start-kube-aggregator-informers ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/apiservice-registration-controller ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 01 14:21:13 crc kubenswrapper[4820]: 
[+]poststarthook/apiservice-discovery-controller ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]autoregister-completion ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/apiservice-openapi-controller ok Feb 01 14:21:13 crc kubenswrapper[4820]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 01 14:21:13 crc kubenswrapper[4820]: livez check failed Feb 01 14:21:13 crc kubenswrapper[4820]: I0201 14:21:13.825263 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 14:21:14 crc kubenswrapper[4820]: I0201 14:21:14.140423 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 06:45:51.247467369 +0000 UTC Feb 01 14:21:14 crc kubenswrapper[4820]: I0201 14:21:14.287852 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:14 crc kubenswrapper[4820]: I0201 14:21:14.288709 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:14 crc kubenswrapper[4820]: I0201 14:21:14.288758 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:14 crc kubenswrapper[4820]: I0201 14:21:14.288772 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:14 crc kubenswrapper[4820]: I0201 14:21:14.303094 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 01 14:21:14 crc kubenswrapper[4820]: I0201 14:21:14.928373 4820 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 01 14:21:14 crc kubenswrapper[4820]: I0201 14:21:14.928492 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 01 14:21:15 crc kubenswrapper[4820]: I0201 14:21:15.141322 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 22:03:08.08443327 +0000 UTC Feb 01 14:21:15 crc kubenswrapper[4820]: I0201 14:21:15.290767 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:15 crc kubenswrapper[4820]: I0201 14:21:15.291921 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:15 crc kubenswrapper[4820]: I0201 14:21:15.292002 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:15 crc kubenswrapper[4820]: I0201 14:21:15.292028 4820 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:16 crc kubenswrapper[4820]: I0201 14:21:16.141914 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 18:53:01.20065269 +0000 UTC Feb 01 14:21:17 crc kubenswrapper[4820]: I0201 14:21:17.142937 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 04:26:10.133456532 +0000 UTC Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.083841 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.083980 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.085025 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.085070 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.085085 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.143899 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 01:42:32.208240051 +0000 UTC Feb 01 14:21:18 crc kubenswrapper[4820]: E0201 14:21:18.376696 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.379953 4820 trace.go:236] Trace[110854176]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Feb-2026 14:21:07.903) (total time: 10476ms): Feb 01 14:21:18 crc kubenswrapper[4820]: Trace[110854176]: ---"Objects listed" error: 10476ms (14:21:18.379) Feb 01 14:21:18 crc kubenswrapper[4820]: Trace[110854176]: [10.476524607s] [10.476524607s] END Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.389651 4820 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.390100 4820 trace.go:236] Trace[1966056194]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Feb-2026 14:21:05.630) (total time: 12759ms): Feb 01 14:21:18 crc kubenswrapper[4820]: Trace[1966056194]: ---"Objects listed" error: 12759ms (14:21:18.389) Feb 01 14:21:18 crc kubenswrapper[4820]: Trace[1966056194]: [12.759744717s] [12.759744717s] END Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.390228 4820 trace.go:236] Trace[284095407]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Feb-2026 14:21:06.533) (total time: 11857ms): Feb 01 14:21:18 crc kubenswrapper[4820]: Trace[284095407]: ---"Objects listed" error: 11857ms (14:21:18.390) Feb 01 14:21:18 crc kubenswrapper[4820]: Trace[284095407]: [11.857052753s] [11.857052753s] END Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.390257 4820 
reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.390242 4820 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 01 14:21:18 crc kubenswrapper[4820]: E0201 14:21:18.390663 4820 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.390771 4820 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.393985 4820 trace.go:236] Trace[1080050913]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Feb-2026 14:21:03.496) (total time: 14896ms): Feb 01 14:21:18 crc kubenswrapper[4820]: Trace[1080050913]: ---"Objects listed" error: 14896ms (14:21:18.393) Feb 01 14:21:18 crc kubenswrapper[4820]: Trace[1080050913]: [14.896995871s] [14.896995871s] END Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.394017 4820 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.403545 4820 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.425699 4820 csr.go:261] certificate signing request csr-qcls8 is approved, waiting to be issued Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.431040 4820 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33534->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.431105 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33534->192.168.126.11:17697: read: connection reset by peer" Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.431169 4820 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33550->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.431231 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33550->192.168.126.11:17697: read: connection reset by peer" Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.438786 4820 csr.go:257] certificate signing request csr-qcls8 is issued Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.825432 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:21:18 crc 
kubenswrapper[4820]: I0201 14:21:18.825956 4820 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.826007 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.830275 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:21:18 crc kubenswrapper[4820]: I0201 14:21:18.967295 4820 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 01 14:21:18 crc kubenswrapper[4820]: W0201 14:21:18.967492 4820 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 01 14:21:18 crc kubenswrapper[4820]: W0201 14:21:18.967529 4820 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 01 14:21:18 crc kubenswrapper[4820]: W0201 14:21:18.967545 4820 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 01 14:21:18 crc kubenswrapper[4820]: W0201 14:21:18.967652 4820 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 01 14:21:18 crc kubenswrapper[4820]: E0201 14:21:18.967573 4820 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.73:45926->38.102.83.73:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1890254b7bbee899 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-01 14:20:59.685488793 +0000 UTC m=+1.205855077,LastTimestamp:2026-02-01 14:20:59.685488793 +0000 UTC m=+1.205855077,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 01 14:21:19 crc 
kubenswrapper[4820]: I0201 14:21:19.112992 4820 apiserver.go:52] "Watching apiserver" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.120040 4820 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.120310 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.120770 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.120835 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.120938 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.121035 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.121084 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.121212 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.121220 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.121487 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.121547 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.122544 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.122747 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.122849 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.123091 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.123123 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.123141 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.123853 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.123910 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.124224 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.130754 4820 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.144268 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 23:11:36.58855734 +0000 UTC Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.155255 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.172707 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.182643 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.193653 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.195760 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.195800 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.195819 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.195836 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.195856 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.195901 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.195927 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.195948 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.195970 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.195987 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196002 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196019 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196034 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196049 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196063 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196083 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196099 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196115 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196146 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196163 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196179 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196173 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196197 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196216 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196262 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196286 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196306 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196324 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196340 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196363 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196385 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196398 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196401 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196444 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196463 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196483 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196506 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196523 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196523 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196550 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196571 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196600 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196621 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196636 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196652 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196667 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196685 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196701 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196710 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196715 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196746 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196764 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196783 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196800 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196819 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196835 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196852 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196887 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196903 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196933 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196951 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196967 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196983 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196999 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197016 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197032 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197046 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197061 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197075 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197091 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197140 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197155 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197170 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197186 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197200 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197216 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197230 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197248 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197265 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197283 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197301 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197343 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197364 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197387 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197410 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197430 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197449 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197467 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 01 
14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197484 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197499 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197514 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197532 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197548 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197564 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197580 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197596 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197612 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197630 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: 
\"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197653 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197679 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197696 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197713 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197728 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197751 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197781 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197798 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197814 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197834 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197860 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197908 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197934 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197955 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197975 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197992 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.198007 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.198021 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.198042 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.198075 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: 
\"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.198125 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.198149 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.198171 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.198192 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.198218 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.198274 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.198295 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.198317 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.198337 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.198359 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.198386 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196718 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.198394 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196719 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196785 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196907 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196968 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.196984 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197150 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197169 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197317 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197324 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197386 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197524 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.198562 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197622 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197690 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197729 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197766 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197787 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197976 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.198610 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:21:19.698586628 +0000 UTC m=+21.218952922 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.198807 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.198870 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.199172 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.199242 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.199309 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.199404 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.199438 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.199450 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.197586 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.199729 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.199742 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.199731 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.198410 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.199802 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.199822 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.199843 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.199861 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.199895 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.199913 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.199935 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.199960 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200149 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: 
"kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.198388 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200223 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200267 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200280 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200470 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200519 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200547 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200559 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). 
InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200572 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200590 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200607 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200623 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200642 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200660 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200678 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200695 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200711 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200728 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200745 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200761 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200760 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200776 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200795 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200810 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200811 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200825 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200841 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200857 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200886 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200905 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200920 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200935 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200959 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200974 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.200988 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" 
(UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201003 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201018 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201033 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201048 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201063 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201079 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201094 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201111 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201126 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201152 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: 
\"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201168 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201182 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201198 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201214 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201230 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201246 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201263 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201279 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201296 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201311 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 01 14:21:19 
crc kubenswrapper[4820]: I0201 14:21:19.201326 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201340 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201356 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201375 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201391 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201407 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201425 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201440 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201460 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201476 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201492 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201510 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201526 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201544 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201562 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201578 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201593 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201610 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201649 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201669 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201685 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201705 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201722 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201744 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201762 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201780 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201797 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201813 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201828 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201845 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201863 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201895 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201941 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201953 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201963 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201972 4820 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201982 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.201990 4820 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202001 4820 reconciler_common.go:293] "Volume 
detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202010 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202019 4820 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202028 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202037 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202047 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202043 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202056 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202141 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202192 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202214 4820 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202259 4820 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202273 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202286 4820 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202300 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202316 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202271 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202329 4820 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202741 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202743 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202778 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202887 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202902 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202915 4820 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202927 4820 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202940 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202952 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202965 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202978 4820 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202991 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.203004 4820 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.203016 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.202982 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.203032 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.203060 4820 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.203071 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.203080 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.203090 4820 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.203100 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.203114 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.203124 4820 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.203133 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.203142 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.203153 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.203162 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.203172 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.203181 4820 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.203189 4820 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.206126 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.206646 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.206856 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.206795 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.207123 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.207281 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.207506 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.207992 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.208184 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.208264 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.208362 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.208533 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.208512 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.209633 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.209709 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.209926 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.210156 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.210325 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.210603 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.210832 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.210897 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.211231 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.211235 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.211396 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.211520 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.211725 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.211806 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.213116 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.211981 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.212014 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.212054 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.212238 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.212315 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.212334 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.212525 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.212669 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.212740 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.212897 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.212939 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.213547 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.213717 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.213793 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.213856 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.213932 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.213914 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.213963 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.214226 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.214353 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). 
InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.214490 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.214587 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.214712 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.214844 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.214955 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.215453 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.215520 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.215787 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.215827 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.215824 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.215861 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.215984 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.216015 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.216022 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.216082 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:19.716062388 +0000 UTC m=+21.236428692 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.216081 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.216282 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.216230 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.216360 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.216386 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.216405 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.217019 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:19.716996402 +0000 UTC m=+21.237362746 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.216416 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.216518 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.217263 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.217341 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.217628 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.217754 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.217813 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.217950 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.218106 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.218220 4820 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.219441 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.219498 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.220461 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.220521 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.220818 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.220974 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.221326 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.221168 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.221765 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.222179 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.223488 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.223601 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.224079 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.224218 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.223434 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.224453 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.225031 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.225909 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.226405 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.228744 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.230884 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.231213 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.231507 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.231612 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.231616 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.232130 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.232366 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.232497 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.232789 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.233033 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.233201 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.233225 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.233240 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.233289 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:19.733271662 +0000 UTC m=+21.253638016 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.233201 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.233316 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.233325 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.233349 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:19.733341473 +0000 UTC m=+21.253707857 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.233587 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.233959 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.234361 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.235038 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.235305 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.235558 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.241592 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.241849 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.241961 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.241978 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.242179 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). 
InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.242654 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.243162 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.245640 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.246432 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.252826 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.253065 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.253246 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.253378 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.253383 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.253586 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.253725 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.253264 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.253686 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.253692 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.253706 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.253974 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.254002 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.254271 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.254327 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.254415 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.254635 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.254732 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.254733 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.254055 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). 
InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.254858 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.254953 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.256679 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.256744 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.257041 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.257084 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.257501 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.257577 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.257986 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.257924 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.266309 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.266454 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.266488 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.272037 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.283474 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.283619 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.301938 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.303843 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.303931 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.303995 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304009 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304021 4820 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304032 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304043 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304055 4820 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 
14:21:19.304066 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304076 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304087 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304097 4820 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304107 4820 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304118 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304129 4820 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304139 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304183 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304196 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304208 4820 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304219 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304230 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath 
\"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304250 4820 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304261 4820 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304271 4820 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304281 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304291 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304301 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304312 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304322 4820 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304333 4820 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304343 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304355 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304365 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304375 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc 
kubenswrapper[4820]: I0201 14:21:19.304385 4820 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304396 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304406 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304416 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304427 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304437 4820 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304448 4820 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304458 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304468 4820 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304478 4820 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304488 4820 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304498 4820 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304509 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 01 
14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304519 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304547 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304559 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304569 4820 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304580 4820 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304610 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304621 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304632 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304643 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304653 4820 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304663 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304674 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304684 4820 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304693 4820 reconciler_common.go:293] 
"Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304704 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304715 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304726 4820 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304736 4820 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304748 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304758 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304768 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304778 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304789 4820 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304800 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304812 4820 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304823 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 
14:21:19.304832 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304843 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304895 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304907 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304920 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304944 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304975 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.304990 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305000 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305010 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305020 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305031 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305060 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 
14:21:19.305072 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305084 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305095 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305105 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305115 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305145 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305155 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305165 4820 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305176 4820 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305186 4820 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305196 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305228 4820 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305240 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 
crc kubenswrapper[4820]: I0201 14:21:19.305396 4820 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305410 4820 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305510 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305522 4820 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305532 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305542 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305575 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305586 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305596 4820 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305608 4820 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305619 4820 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305649 4820 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305672 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305683 4820 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305693 4820 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305703 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305733 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305744 4820 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305754 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305764 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305774 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305783 4820 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305792 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305821 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305831 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305840 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305850 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node 
\"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305861 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305871 4820 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305905 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305916 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305926 4820 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305936 4820 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305946 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305955 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305988 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.305999 4820 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.306011 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.306022 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.306035 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.306066 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.306081 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.306091 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.306104 4820 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.306114 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.306124 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.306152 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.306162 4820 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.308155 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.308353 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.308516 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.310583 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d" exitCode=255 Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.310619 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d"} Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.337289 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.340088 4820 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.340341 4820 scope.go:117] "RemoveContainer" containerID="cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.363570 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.379945 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.391866 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.403746 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.414653 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.426789 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.438305 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.438488 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.440322 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-01 14:16:18 +0000 UTC, rotation deadline is 2026-12-21 10:50:05.065272753 +0000 UTC Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.440372 4820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7748h28m45.624903983s for next certificate rotation Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.449690 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.454803 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.458557 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.462861 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: W0201 14:21:19.476889 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-5c4c1078aea950c0e6c1b600f679f9574e34b8dc8347497f4f9eea3cb97a3c37 WatchSource:0}: Error finding container 5c4c1078aea950c0e6c1b600f679f9574e34b8dc8347497f4f9eea3cb97a3c37: Status 404 returned error can't find the container with id 5c4c1078aea950c0e6c1b600f679f9574e34b8dc8347497f4f9eea3cb97a3c37 Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.477814 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: W0201 14:21:19.478571 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-2b597cbae37f4a5129d5b9a3a5bdc75c15bcc97d21ec0f5770afc070e1ac7f14 WatchSource:0}: Error finding container 2b597cbae37f4a5129d5b9a3a5bdc75c15bcc97d21ec0f5770afc070e1ac7f14: Status 404 returned error can't find the container with id 2b597cbae37f4a5129d5b9a3a5bdc75c15bcc97d21ec0f5770afc070e1ac7f14 Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.675407 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-r52d9"] Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.676703 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-r52d9" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.676922 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-l4chw"] Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.677169 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-l4chw" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.680680 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.682601 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.682739 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.683972 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.684077 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.684105 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.684159 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.701746 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.709158 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.709291 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:21:20.70926551 +0000 UTC m=+22.229631794 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.715065 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.728843 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.743195 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc 
kubenswrapper[4820]: I0201 14:21:19.752096 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.763277 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.774597 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.785008 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.794630 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01
T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.805278 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.809990 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.810024 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8wdt\" (UniqueName: \"kubernetes.io/projected/099a4607-5e9e-42d7-926d-56372fd5f23a-kube-api-access-k8wdt\") pod \"node-resolver-r52d9\" (UID: \"099a4607-5e9e-42d7-926d-56372fd5f23a\") " pod="openshift-dns/node-resolver-r52d9" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.810044 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.810060 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2bc978ee-ee67-4f69-8d91-361eb5b226fb-host\") pod \"node-ca-l4chw\" (UID: \"2bc978ee-ee67-4f69-8d91-361eb5b226fb\") " pod="openshift-image-registry/node-ca-l4chw" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.810079 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2bc978ee-ee67-4f69-8d91-361eb5b226fb-serviceca\") pod \"node-ca-l4chw\" (UID: \"2bc978ee-ee67-4f69-8d91-361eb5b226fb\") " pod="openshift-image-registry/node-ca-l4chw" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.810100 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gq6tm\" (UniqueName: \"kubernetes.io/projected/2bc978ee-ee67-4f69-8d91-361eb5b226fb-kube-api-access-gq6tm\") pod \"node-ca-l4chw\" (UID: \"2bc978ee-ee67-4f69-8d91-361eb5b226fb\") " pod="openshift-image-registry/node-ca-l4chw" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.810123 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.810147 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.810172 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/099a4607-5e9e-42d7-926d-56372fd5f23a-hosts-file\") pod \"node-resolver-r52d9\" (UID: \"099a4607-5e9e-42d7-926d-56372fd5f23a\") " pod="openshift-dns/node-resolver-r52d9" Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.810261 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.810290 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.810302 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.810309 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.810353 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:20.810328138 +0000 UTC m=+22.330694422 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.810409 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.810442 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:20.81042416 +0000 UTC m=+22.330790514 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.810491 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.810508 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.810517 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.810531 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:20.810523253 +0000 UTC m=+22.330889537 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 14:21:19 crc kubenswrapper[4820]: E0201 14:21:19.810546 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:20.810537883 +0000 UTC m=+22.330904167 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.815706 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.826591 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.834183 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.840932 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.849939 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.878813 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.900542 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.910947 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2bc978ee-ee67-4f69-8d91-361eb5b226fb-serviceca\") pod \"node-ca-l4chw\" (UID: \"2bc978ee-ee67-4f69-8d91-361eb5b226fb\") " pod="openshift-image-registry/node-ca-l4chw" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.910984 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq6tm\" (UniqueName: \"kubernetes.io/projected/2bc978ee-ee67-4f69-8d91-361eb5b226fb-kube-api-access-gq6tm\") pod \"node-ca-l4chw\" (UID: \"2bc978ee-ee67-4f69-8d91-361eb5b226fb\") " pod="openshift-image-registry/node-ca-l4chw" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.911024 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/099a4607-5e9e-42d7-926d-56372fd5f23a-hosts-file\") pod \"node-resolver-r52d9\" (UID: \"099a4607-5e9e-42d7-926d-56372fd5f23a\") " pod="openshift-dns/node-resolver-r52d9" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.911052 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8wdt\" (UniqueName: \"kubernetes.io/projected/099a4607-5e9e-42d7-926d-56372fd5f23a-kube-api-access-k8wdt\") pod \"node-resolver-r52d9\" (UID: \"099a4607-5e9e-42d7-926d-56372fd5f23a\") " pod="openshift-dns/node-resolver-r52d9" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.911080 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2bc978ee-ee67-4f69-8d91-361eb5b226fb-host\") pod \"node-ca-l4chw\" (UID: \"2bc978ee-ee67-4f69-8d91-361eb5b226fb\") " pod="openshift-image-registry/node-ca-l4chw" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.911128 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2bc978ee-ee67-4f69-8d91-361eb5b226fb-host\") pod \"node-ca-l4chw\" (UID: \"2bc978ee-ee67-4f69-8d91-361eb5b226fb\") " pod="openshift-image-registry/node-ca-l4chw" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.911133 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/099a4607-5e9e-42d7-926d-56372fd5f23a-hosts-file\") pod \"node-resolver-r52d9\" (UID: \"099a4607-5e9e-42d7-926d-56372fd5f23a\") 
" pod="openshift-dns/node-resolver-r52d9" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.913325 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2bc978ee-ee67-4f69-8d91-361eb5b226fb-serviceca\") pod \"node-ca-l4chw\" (UID: \"2bc978ee-ee67-4f69-8d91-361eb5b226fb\") " pod="openshift-image-registry/node-ca-l4chw" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.934985 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8wdt\" (UniqueName: \"kubernetes.io/projected/099a4607-5e9e-42d7-926d-56372fd5f23a-kube-api-access-k8wdt\") pod \"node-resolver-r52d9\" (UID: \"099a4607-5e9e-42d7-926d-56372fd5f23a\") " pod="openshift-dns/node-resolver-r52d9" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.937866 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq6tm\" (UniqueName: \"kubernetes.io/projected/2bc978ee-ee67-4f69-8d91-361eb5b226fb-kube-api-access-gq6tm\") pod \"node-ca-l4chw\" (UID: \"2bc978ee-ee67-4f69-8d91-361eb5b226fb\") " pod="openshift-image-registry/node-ca-l4chw" Feb 01 14:21:19 crc kubenswrapper[4820]: I0201 14:21:19.991109 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-r52d9" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.000381 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-l4chw" Feb 01 14:21:20 crc kubenswrapper[4820]: W0201 14:21:20.044207 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod099a4607_5e9e_42d7_926d_56372fd5f23a.slice/crio-17067ee7172d0b6d5d19e80bf90681987763dcfd969120e04d94cecefbbc00f5 WatchSource:0}: Error finding container 17067ee7172d0b6d5d19e80bf90681987763dcfd969120e04d94cecefbbc00f5: Status 404 returned error can't find the container with id 17067ee7172d0b6d5d19e80bf90681987763dcfd969120e04d94cecefbbc00f5 Feb 01 14:21:20 crc kubenswrapper[4820]: W0201 14:21:20.044825 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bc978ee_ee67_4f69_8d91_361eb5b226fb.slice/crio-c3bc28a7344f4478c4610ebb59d63ddc83ee28d10b2bf1eff254ab931f403d9f WatchSource:0}: Error finding container c3bc28a7344f4478c4610ebb59d63ddc83ee28d10b2bf1eff254ab931f403d9f: Status 404 returned error can't find the container with id c3bc28a7344f4478c4610ebb59d63ddc83ee28d10b2bf1eff254ab931f403d9f Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.144852 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 08:28:39.948607624 +0000 UTC Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.313716 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-r52d9" event={"ID":"099a4607-5e9e-42d7-926d-56372fd5f23a","Type":"ContainerStarted","Data":"17067ee7172d0b6d5d19e80bf90681987763dcfd969120e04d94cecefbbc00f5"} Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.315286 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd"} Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.315338 
4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a220bf55ff475c9496ea4fe0db3641c4f80b0720e2e78582dc9ff37147bfd9a0"} Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.317732 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.320232 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef"} Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.320441 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.322174 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9"} Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.322210 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b"} Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.322224 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2b597cbae37f4a5129d5b9a3a5bdc75c15bcc97d21ec0f5770afc070e1ac7f14"} Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.323240 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5c4c1078aea950c0e6c1b600f679f9574e34b8dc8347497f4f9eea3cb97a3c37"} Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.324082 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-l4chw" event={"ID":"2bc978ee-ee67-4f69-8d91-361eb5b226fb","Type":"ContainerStarted","Data":"c3bc28a7344f4478c4610ebb59d63ddc83ee28d10b2bf1eff254ab931f403d9f"} Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.326985 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.337459 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.347849 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.354718 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc 
kubenswrapper[4820]: I0201 14:21:20.368200 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.380298 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.403011 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.412183 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.420554 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.431917 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.439241 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.447992 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.461206 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\"
:\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.470987 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.477565 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-zbtsv"] Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.478470 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m4skx"] Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 
14:21:20.478735 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.479389 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-w8vbg"] Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.479602 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.479661 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-q922s"] Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.479782 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.480060 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.481473 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.484522 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.484762 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.485221 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.485679 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.485892 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.486036 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.487483 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.487796 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.488006 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.488405 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.488725 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.488791 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.489059 4820 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.489079 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.489210 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.489266 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.489410 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.489524 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.489729 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.504283 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.520721 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.528638 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.539192 4820 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.545947 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.556173 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.563960 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.571410 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.581690 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.598614 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.615647 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-multus-cni-dir\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.615677 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbvl7\" (UniqueName: \"kubernetes.io/projected/e228d5b6-4ae4-4c56-b52d-d895d1e4ab67-kube-api-access-vbvl7\") pod \"multus-additional-cni-plugins-zbtsv\" (UID: \"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\") " pod="openshift-multus/multus-additional-cni-plugins-zbtsv" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.615696 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-node-log\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.615712 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-host-run-k8s-cni-cncf-io\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.615729 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llz58\" (UniqueName: \"kubernetes.io/projected/20f8fae3-1755-461a-8748-a0033423ad5a-kube-api-access-llz58\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.615751 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c428279-629a-4fd5-9955-1598ed4f6f84-env-overrides\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.615767 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e228d5b6-4ae4-4c56-b52d-d895d1e4ab67-system-cni-dir\") pod \"multus-additional-cni-plugins-zbtsv\" (UID: \"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\") " pod="openshift-multus/multus-additional-cni-plugins-zbtsv" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.615782 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/060a9e0b-803f-4ccc-bed6-92614d449527-proxy-tls\") pod \"machine-config-daemon-w8vbg\" (UID: \"060a9e0b-803f-4ccc-bed6-92614d449527\") " pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.615798 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-etc-openvswitch\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.615813 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-os-release\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.615831 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-etc-kubernetes\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.615846 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-systemd-units\") pod 
\"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.615861 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-run-netns\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.615897 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-var-lib-openvswitch\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.615915 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c428279-629a-4fd5-9955-1598ed4f6f84-ovnkube-config\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.615932 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-multus-socket-dir-parent\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.615957 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-host-run-netns\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.615972 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-host-run-multus-certs\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616001 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e228d5b6-4ae4-4c56-b52d-d895d1e4ab67-os-release\") pod \"multus-additional-cni-plugins-zbtsv\" (UID: \"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\") " pod="openshift-multus/multus-additional-cni-plugins-zbtsv" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616029 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e228d5b6-4ae4-4c56-b52d-d895d1e4ab67-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zbtsv\" (UID: \"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\") " pod="openshift-multus/multus-additional-cni-plugins-zbtsv" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616048 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cnibin\" (UniqueName: \"kubernetes.io/host-path/e228d5b6-4ae4-4c56-b52d-d895d1e4ab67-cnibin\") pod \"multus-additional-cni-plugins-zbtsv\" (UID: \"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\") " pod="openshift-multus/multus-additional-cni-plugins-zbtsv" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616074 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/060a9e0b-803f-4ccc-bed6-92614d449527-rootfs\") pod \"machine-config-daemon-w8vbg\" (UID: \"060a9e0b-803f-4ccc-bed6-92614d449527\") " pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616107 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-log-socket\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616128 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lp6j\" (UniqueName: \"kubernetes.io/projected/2c428279-629a-4fd5-9955-1598ed4f6f84-kube-api-access-6lp6j\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616148 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-cni-bin\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616170 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c428279-629a-4fd5-9955-1598ed4f6f84-ovn-node-metrics-cert\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616191 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlddw\" (UniqueName: \"kubernetes.io/projected/060a9e0b-803f-4ccc-bed6-92614d449527-kube-api-access-qlddw\") pod \"machine-config-daemon-w8vbg\" (UID: \"060a9e0b-803f-4ccc-bed6-92614d449527\") " pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616211 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-slash\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616227 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-run-systemd\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc 
kubenswrapper[4820]: I0201 14:21:20.616240 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-kubelet\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616253 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-run-openvswitch\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616265 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/20f8fae3-1755-461a-8748-a0033423ad5a-cni-binary-copy\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616281 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616300 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-host-var-lib-cni-multus\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616372 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e228d5b6-4ae4-4c56-b52d-d895d1e4ab67-cni-binary-copy\") pod \"multus-additional-cni-plugins-zbtsv\" (UID: \"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\") " pod="openshift-multus/multus-additional-cni-plugins-zbtsv" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616420 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-run-ovn\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616447 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-run-ovn-kubernetes\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616469 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-host-var-lib-cni-bin\") 
pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616491 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/20f8fae3-1755-461a-8748-a0033423ad5a-multus-daemon-config\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616513 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-host-var-lib-kubelet\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616535 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e228d5b6-4ae4-4c56-b52d-d895d1e4ab67-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zbtsv\" (UID: \"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\") " pod="openshift-multus/multus-additional-cni-plugins-zbtsv" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616561 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-cni-netd\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616590 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2c428279-629a-4fd5-9955-1598ed4f6f84-ovnkube-script-lib\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616642 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-hostroot\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616676 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/060a9e0b-803f-4ccc-bed6-92614d449527-mcd-auth-proxy-config\") pod \"machine-config-daemon-w8vbg\" (UID: \"060a9e0b-803f-4ccc-bed6-92614d449527\") " pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616725 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-multus-conf-dir\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616779 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-system-cni-dir\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.616798 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-cnibin\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.642166 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.682377 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.717218 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.717326 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-run-systemd\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.717355 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-cni-bin\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.717374 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c428279-629a-4fd5-9955-1598ed4f6f84-ovn-node-metrics-cert\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.717398 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlddw\" (UniqueName: \"kubernetes.io/projected/060a9e0b-803f-4ccc-bed6-92614d449527-kube-api-access-qlddw\") pod \"machine-config-daemon-w8vbg\" (UID: \"060a9e0b-803f-4ccc-bed6-92614d449527\") " pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.717422 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-slash\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.717443 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-kubelet\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.717488 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-kubelet\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.717493 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-cni-bin\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: E0201 14:21:20.717516 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:21:22.717484391 +0000 UTC m=+24.237850685 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.717523 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-slash\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.717556 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-run-openvswitch\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.717613 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/20f8fae3-1755-461a-8748-a0033423ad5a-cni-binary-copy\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.717645 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.717648 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-run-openvswitch\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.717675 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.717680 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-host-var-lib-cni-multus\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.717690 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-run-systemd\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.717819 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-host-var-lib-cni-multus\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.717888 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e228d5b6-4ae4-4c56-b52d-d895d1e4ab67-cni-binary-copy\") pod \"multus-additional-cni-plugins-zbtsv\" (UID: \"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\") " pod="openshift-multus/multus-additional-cni-plugins-zbtsv" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.717913 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/20f8fae3-1755-461a-8748-a0033423ad5a-multus-daemon-config\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.717934 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-run-ovn\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.717956 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-run-ovn-kubernetes\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.717975 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-host-var-lib-cni-bin\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.717995 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2c428279-629a-4fd5-9955-1598ed4f6f84-ovnkube-script-lib\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718015 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-host-var-lib-kubelet\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718035 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e228d5b6-4ae4-4c56-b52d-d895d1e4ab67-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zbtsv\" (UID: \"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\") " pod="openshift-multus/multus-additional-cni-plugins-zbtsv" Feb 01 
14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718038 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-run-ovn\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718056 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-cni-netd\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718076 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/060a9e0b-803f-4ccc-bed6-92614d449527-mcd-auth-proxy-config\") pod \"machine-config-daemon-w8vbg\" (UID: \"060a9e0b-803f-4ccc-bed6-92614d449527\") " pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718096 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-hostroot\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718117 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-cnibin\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718136 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-multus-conf-dir\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718145 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-host-var-lib-kubelet\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718169 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-system-cni-dir\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718187 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-run-ovn-kubernetes\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718196 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-multus-cni-dir\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718212 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-host-var-lib-cni-bin\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718219 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbvl7\" (UniqueName: \"kubernetes.io/projected/e228d5b6-4ae4-4c56-b52d-d895d1e4ab67-kube-api-access-vbvl7\") pod \"multus-additional-cni-plugins-zbtsv\" (UID: \"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\") " pod="openshift-multus/multus-additional-cni-plugins-zbtsv" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718242 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-host-run-k8s-cni-cncf-io\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718261 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-node-log\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718282 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c428279-629a-4fd5-9955-1598ed4f6f84-env-overrides\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718303 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llz58\" (UniqueName: \"kubernetes.io/projected/20f8fae3-1755-461a-8748-a0033423ad5a-kube-api-access-llz58\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718332 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-etc-kubernetes\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718351 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e228d5b6-4ae4-4c56-b52d-d895d1e4ab67-system-cni-dir\") pod \"multus-additional-cni-plugins-zbtsv\" (UID: \"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\") " pod="openshift-multus/multus-additional-cni-plugins-zbtsv" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718371 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/060a9e0b-803f-4ccc-bed6-92614d449527-proxy-tls\") pod \"machine-config-daemon-w8vbg\" 
(UID: \"060a9e0b-803f-4ccc-bed6-92614d449527\") " pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718391 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-etc-openvswitch\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718409 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-os-release\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718430 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-multus-socket-dir-parent\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718449 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-systemd-units\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718470 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-run-netns\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718468 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/20f8fae3-1755-461a-8748-a0033423ad5a-cni-binary-copy\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718489 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-var-lib-openvswitch\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718510 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c428279-629a-4fd5-9955-1598ed4f6f84-ovnkube-config\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718590 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-host-run-netns\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: 
I0201 14:21:20.718612 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-host-run-multus-certs\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718632 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e228d5b6-4ae4-4c56-b52d-d895d1e4ab67-os-release\") pod \"multus-additional-cni-plugins-zbtsv\" (UID: \"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\") " pod="openshift-multus/multus-additional-cni-plugins-zbtsv" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718652 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e228d5b6-4ae4-4c56-b52d-d895d1e4ab67-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zbtsv\" (UID: \"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\") " pod="openshift-multus/multus-additional-cni-plugins-zbtsv" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718682 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e228d5b6-4ae4-4c56-b52d-d895d1e4ab67-cnibin\") pod \"multus-additional-cni-plugins-zbtsv\" (UID: \"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\") " pod="openshift-multus/multus-additional-cni-plugins-zbtsv" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718746 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/060a9e0b-803f-4ccc-bed6-92614d449527-rootfs\") pod \"machine-config-daemon-w8vbg\" (UID: \"060a9e0b-803f-4ccc-bed6-92614d449527\") " pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718818 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-log-socket\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718841 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lp6j\" (UniqueName: \"kubernetes.io/projected/2c428279-629a-4fd5-9955-1598ed4f6f84-kube-api-access-6lp6j\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.718977 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e228d5b6-4ae4-4c56-b52d-d895d1e4ab67-cni-binary-copy\") pod \"multus-additional-cni-plugins-zbtsv\" (UID: \"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\") " pod="openshift-multus/multus-additional-cni-plugins-zbtsv" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.719036 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2c428279-629a-4fd5-9955-1598ed4f6f84-ovnkube-script-lib\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: 
I0201 14:21:20.719187 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-multus-socket-dir-parent\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.719190 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-hostroot\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.719225 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-cni-netd\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.719242 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-etc-openvswitch\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.719261 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-multus-cni-dir\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.719316 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-cnibin\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.719351 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-multus-conf-dir\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.719351 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-os-release\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.719417 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-system-cni-dir\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.719457 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-run-netns\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.719499 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-systemd-units\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.719536 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-host-run-multus-certs\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.719572 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-host-run-netns\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.719687 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c428279-629a-4fd5-9955-1598ed4f6f84-ovnkube-config\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.719739 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-var-lib-openvswitch\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.719792 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e228d5b6-4ae4-4c56-b52d-d895d1e4ab67-os-release\") pod \"multus-additional-cni-plugins-zbtsv\" (UID: \"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\") " pod="openshift-multus/multus-additional-cni-plugins-zbtsv" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.719824 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-host-run-k8s-cni-cncf-io\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.719850 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e228d5b6-4ae4-4c56-b52d-d895d1e4ab67-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zbtsv\" (UID: \"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\") " pod="openshift-multus/multus-additional-cni-plugins-zbtsv" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.719869 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/060a9e0b-803f-4ccc-bed6-92614d449527-mcd-auth-proxy-config\") pod \"machine-config-daemon-w8vbg\" (UID: \"060a9e0b-803f-4ccc-bed6-92614d449527\") " pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 14:21:20 crc kubenswrapper[4820]: 
I0201 14:21:20.719907 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20f8fae3-1755-461a-8748-a0033423ad5a-etc-kubernetes\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.719940 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e228d5b6-4ae4-4c56-b52d-d895d1e4ab67-system-cni-dir\") pod \"multus-additional-cni-plugins-zbtsv\" (UID: \"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\") " pod="openshift-multus/multus-additional-cni-plugins-zbtsv" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.719945 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-node-log\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.719973 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/060a9e0b-803f-4ccc-bed6-92614d449527-rootfs\") pod \"machine-config-daemon-w8vbg\" (UID: \"060a9e0b-803f-4ccc-bed6-92614d449527\") " pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.719982 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e228d5b6-4ae4-4c56-b52d-d895d1e4ab67-cnibin\") pod \"multus-additional-cni-plugins-zbtsv\" (UID: \"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\") " pod="openshift-multus/multus-additional-cni-plugins-zbtsv" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.720012 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-log-socket\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.720045 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/20f8fae3-1755-461a-8748-a0033423ad5a-multus-daemon-config\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.720094 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c428279-629a-4fd5-9955-1598ed4f6f84-env-overrides\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.724087 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/060a9e0b-803f-4ccc-bed6-92614d449527-proxy-tls\") pod \"machine-config-daemon-w8vbg\" (UID: \"060a9e0b-803f-4ccc-bed6-92614d449527\") " pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.725232 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/2c428279-629a-4fd5-9955-1598ed4f6f84-ovn-node-metrics-cert\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.735974 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.738340 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e228d5b6-4ae4-4c56-b52d-d895d1e4ab67-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zbtsv\" (UID: \"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\") " pod="openshift-multus/multus-additional-cni-plugins-zbtsv" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.748797 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlddw\" (UniqueName: \"kubernetes.io/projected/060a9e0b-803f-4ccc-bed6-92614d449527-kube-api-access-qlddw\") pod \"machine-config-daemon-w8vbg\" (UID: \"060a9e0b-803f-4ccc-bed6-92614d449527\") " pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.775197 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lp6j\" (UniqueName: 
\"kubernetes.io/projected/2c428279-629a-4fd5-9955-1598ed4f6f84-kube-api-access-6lp6j\") pod \"ovnkube-node-m4skx\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.796133 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbvl7\" (UniqueName: \"kubernetes.io/projected/e228d5b6-4ae4-4c56-b52d-d895d1e4ab67-kube-api-access-vbvl7\") pod \"multus-additional-cni-plugins-zbtsv\" (UID: \"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\") " pod="openshift-multus/multus-additional-cni-plugins-zbtsv" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.805169 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.814330 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.814511 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llz58\" (UniqueName: \"kubernetes.io/projected/20f8fae3-1755-461a-8748-a0033423ad5a-kube-api-access-llz58\") pod \"multus-q922s\" (UID: \"20f8fae3-1755-461a-8748-a0033423ad5a\") " pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.819232 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-q922s" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.819456 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.819493 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.819513 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.819536 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:20 crc kubenswrapper[4820]: E0201 14:21:20.819638 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not 
registered Feb 01 14:21:20 crc kubenswrapper[4820]: E0201 14:21:20.819687 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:22.819674127 +0000 UTC m=+24.340040411 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 14:21:20 crc kubenswrapper[4820]: E0201 14:21:20.819717 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 14:21:20 crc kubenswrapper[4820]: E0201 14:21:20.819774 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:22.819755049 +0000 UTC m=+24.340121333 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 14:21:20 crc kubenswrapper[4820]: E0201 14:21:20.819836 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 14:21:20 crc kubenswrapper[4820]: E0201 14:21:20.819886 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 14:21:20 crc kubenswrapper[4820]: E0201 14:21:20.819901 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:20 crc kubenswrapper[4820]: E0201 14:21:20.819924 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 14:21:20 crc kubenswrapper[4820]: E0201 14:21:20.819941 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 14:21:20 crc kubenswrapper[4820]: E0201 14:21:20.819953 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:22.819934923 +0000 UTC m=+24.340301207 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:20 crc kubenswrapper[4820]: E0201 14:21:20.819956 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:20 crc kubenswrapper[4820]: E0201 14:21:20.819990 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:22.819981014 +0000 UTC m=+24.340347298 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:20 crc kubenswrapper[4820]: W0201 14:21:20.822118 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c428279_629a_4fd5_9955_1598ed4f6f84.slice/crio-f6638deac942a682a1833bbecc88e4131a6825f9cddceeee12cc776f30370fa8 WatchSource:0}: Error finding container f6638deac942a682a1833bbecc88e4131a6825f9cddceeee12cc776f30370fa8: Status 404 returned error can't find the container with id f6638deac942a682a1833bbecc88e4131a6825f9cddceeee12cc776f30370fa8 Feb 01 14:21:20 crc kubenswrapper[4820]: W0201 14:21:20.840567 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20f8fae3_1755_461a_8748_a0033423ad5a.slice/crio-7cf112482140a3d9dc1f3648ffc6fae14fde6a6be717cedb689339cb7f56f0e6 WatchSource:0}: Error finding container 7cf112482140a3d9dc1f3648ffc6fae14fde6a6be717cedb689339cb7f56f0e6: Status 404 returned error can't find the container with id 7cf112482140a3d9dc1f3648ffc6fae14fde6a6be717cedb689339cb7f56f0e6 Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.843005 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.890170 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:20 crc kubenswrapper[4820]: I0201 14:21:20.929652 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.095211 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" Feb 01 14:21:21 crc kubenswrapper[4820]: W0201 14:21:21.128039 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode228d5b6_4ae4_4c56_b52d_d895d1e4ab67.slice/crio-18abd93db72bf39be7d055b70f212ada67f363edc7c5a7445194bb66f1a74166 WatchSource:0}: Error finding container 18abd93db72bf39be7d055b70f212ada67f363edc7c5a7445194bb66f1a74166: Status 404 returned error can't find the container with id 18abd93db72bf39be7d055b70f212ada67f363edc7c5a7445194bb66f1a74166 Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.145703 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 06:46:38.21440744 +0000 UTC Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.198485 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.198529 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.198547 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:21 crc kubenswrapper[4820]: E0201 14:21:21.198636 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:21:21 crc kubenswrapper[4820]: E0201 14:21:21.198727 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:21:21 crc kubenswrapper[4820]: E0201 14:21:21.198804 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.208412 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.263891 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.264743 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.265351 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.265969 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.266492 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.267121 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.267683 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.269577 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.270081 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.270986 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.271632 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.272586 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.273139 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.277256 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.277799 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.278769 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.279335 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.279915 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.281245 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.281946 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.282589 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.283064 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.283757 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.284523 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.285049 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.285857 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.286647 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" 
path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.287350 4820 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.287491 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.289427 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.292911 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.293483 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.295264 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.296538 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.297277 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.298580 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.299499 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.300279 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.301650 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.302762 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.303541 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.304644 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.305334 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.306763 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.307736 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.308920 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.309581 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.310305 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.311477 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.341900 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.348633 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.357285 4820 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.367351 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.378919 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.380110 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.392228 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.407527 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.421560 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.434606 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.445595 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.460681 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-01T14:21:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.477696 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\
":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.490361 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.503466 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.522844 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.544778 4820 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.545739 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" event={"ID":"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67","Type":"ContainerStarted","Data":"18abd93db72bf39be7d055b70f212ada67f363edc7c5a7445194bb66f1a74166"} Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.545790 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerStarted","Data":"f6638deac942a682a1833bbecc88e4131a6825f9cddceeee12cc776f30370fa8"} Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.545812 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-l4chw" event={"ID":"2bc978ee-ee67-4f69-8d91-361eb5b226fb","Type":"ContainerStarted","Data":"7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09"} Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.545845 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q922s" event={"ID":"20f8fae3-1755-461a-8748-a0033423ad5a","Type":"ContainerStarted","Data":"7cf112482140a3d9dc1f3648ffc6fae14fde6a6be717cedb689339cb7f56f0e6"} Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.545865 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-r52d9" event={"ID":"099a4607-5e9e-42d7-926d-56372fd5f23a","Type":"ContainerStarted","Data":"6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31"} Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.545920 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerStarted","Data":"584783d59842125ed9b3c70cce528014282b8f0b83347c16a881cff61b508be0"} Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.565611 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceac
count\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.603930 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.642330 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.688995 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.723579 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.764750 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.801411 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.842774 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.882886 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.923725 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.931491 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.935468 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.960533 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 01 14:21:21 crc kubenswrapper[4820]: I0201 14:21:21.983483 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.024177 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:22Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.064456 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:22Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.102379 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:22Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.143370 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:22Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.146071 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 22:37:22.976345385 +0000 UTC Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.181712 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:22Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.228421 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name
\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-01T14:21:22Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.262866 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\
\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:22Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.306237 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:22Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.341200 4820 generic.go:334] "Generic (PLEG): container finished" podID="e228d5b6-4ae4-4c56-b52d-d895d1e4ab67" containerID="37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30" exitCode=0 Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.341288 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" event={"ID":"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67","Type":"ContainerDied","Data":"37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30"} Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.342625 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerID="99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90" exitCode=0 Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.342685 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerDied","Data":"99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90"} Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.343713 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q922s" event={"ID":"20f8fae3-1755-461a-8748-a0033423ad5a","Type":"ContainerStarted","Data":"48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7"} Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.345828 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerStarted","Data":"b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca"} Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.345871 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerStarted","Data":"b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4"} Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.347087 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:22Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.347677 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7"} Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.382721 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:22Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.424868 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:22Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.462582 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:22Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.503469 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:22Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.542567 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:22Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.584555 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:22Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.626604 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:22Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.673762 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:22Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.715405 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:22Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.739716 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:21:22 crc kubenswrapper[4820]: E0201 14:21:22.739899 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:21:26.73988554 +0000 UTC m=+28.260251824 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.742246 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:22Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.782977 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:22Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.830861 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:22Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.841078 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.841124 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.841145 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.841164 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:22 crc kubenswrapper[4820]: E0201 14:21:22.841247 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 14:21:22 crc kubenswrapper[4820]: E0201 14:21:22.841271 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 14:21:22 crc kubenswrapper[4820]: E0201 14:21:22.841285 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 14:21:22 crc kubenswrapper[4820]: E0201 14:21:22.841295 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:22 crc kubenswrapper[4820]: E0201 14:21:22.841300 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 14:21:22 crc kubenswrapper[4820]: E0201 14:21:22.841320 4820 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:26.841302247 +0000 UTC m=+28.361668531 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 14:21:22 crc kubenswrapper[4820]: E0201 14:21:22.841376 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 14:21:22 crc kubenswrapper[4820]: E0201 14:21:22.841400 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 14:21:22 crc kubenswrapper[4820]: E0201 14:21:22.841411 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:26.841388269 +0000 UTC m=+28.361754623 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:22 crc kubenswrapper[4820]: E0201 14:21:22.841416 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:22 crc kubenswrapper[4820]: E0201 14:21:22.841430 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:26.841420179 +0000 UTC m=+28.361786463 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 14:21:22 crc kubenswrapper[4820]: E0201 14:21:22.841498 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:26.84146072 +0000 UTC m=+28.361827084 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.868159 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:22Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.907992 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:22Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.943012 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:22Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:22 crc kubenswrapper[4820]: I0201 14:21:22.984275 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:22Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.023586 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:23Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.065966 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:23Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.109752 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:23Z 
is after 2025-08-24T17:21:41Z" Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.147077 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 04:21:08.780840725 +0000 UTC Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.198642 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.198694 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.198656 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:23 crc kubenswrapper[4820]: E0201 14:21:23.198794 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:21:23 crc kubenswrapper[4820]: E0201 14:21:23.198895 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:21:23 crc kubenswrapper[4820]: E0201 14:21:23.198974 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.354116 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" event={"ID":"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67","Type":"ContainerStarted","Data":"cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9"} Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.358010 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerStarted","Data":"92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d"} Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.358073 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerStarted","Data":"016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9"} Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.358091 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerStarted","Data":"b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f"} Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.358108 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerStarted","Data":"eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1"} Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.358125 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerStarted","Data":"c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7"} Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.358139 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerStarted","Data":"0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6"} Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.373323 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:23Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.384542 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:23Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.396374 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:23Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.406155 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:23Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.420074 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\
\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\
\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:23Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.449508 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:23Z 
is after 2025-08-24T17:21:41Z" Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.464018 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:23Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.478053 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:23Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.498244 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:23Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.509163 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:23Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.542108 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:23Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.583929 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:23Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.625591 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:23Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:23 crc kubenswrapper[4820]: I0201 14:21:23.665653 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:23Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.147981 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 21:03:51.688879309 +0000 UTC Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.364136 4820 generic.go:334] "Generic (PLEG): container finished" podID="e228d5b6-4ae4-4c56-b52d-d895d1e4ab67" containerID="cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9" exitCode=0 Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.364179 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" event={"ID":"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67","Type":"ContainerDied","Data":"cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9"} Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.380449 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:24Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.401629 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:24Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.412550 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:24Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.426344 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:24Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.438949 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:24Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.450935 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:24Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.465520 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:24Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.476808 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:24Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.489435 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:24Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.500211 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:24Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.517267 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:24Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.535863 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:24Z 
is after 2025-08-24T17:21:41Z" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.546713 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:24Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.559222 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:24Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.791476 4820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.793130 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.793170 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.793181 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.793270 4820 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.801101 4820 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.801277 4820 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.802176 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.802212 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.802221 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.802234 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.802243 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:24Z","lastTransitionTime":"2026-02-01T14:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:24 crc kubenswrapper[4820]: E0201 14:21:24.818641 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:24Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.823525 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.823573 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.823584 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.823601 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.823611 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:24Z","lastTransitionTime":"2026-02-01T14:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:24 crc kubenswrapper[4820]: E0201 14:21:24.837040 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:24Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.840500 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.840624 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.840718 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.840806 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.840900 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:24Z","lastTransitionTime":"2026-02-01T14:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:24 crc kubenswrapper[4820]: E0201 14:21:24.854069 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:24Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.856957 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.856991 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.857001 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.857020 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.857031 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:24Z","lastTransitionTime":"2026-02-01T14:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:24 crc kubenswrapper[4820]: E0201 14:21:24.868992 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:24Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.871959 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.872075 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.872136 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.872198 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.872259 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:24Z","lastTransitionTime":"2026-02-01T14:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:24 crc kubenswrapper[4820]: E0201 14:21:24.884125 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:24Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:24 crc kubenswrapper[4820]: E0201 14:21:24.884339 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.887567 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.887601 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.887612 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.887626 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.887636 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:24Z","lastTransitionTime":"2026-02-01T14:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.990343 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.990383 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.990393 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.990408 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:24 crc kubenswrapper[4820]: I0201 14:21:24.990419 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:24Z","lastTransitionTime":"2026-02-01T14:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.092642 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.092680 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.092690 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.092704 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.092714 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:25Z","lastTransitionTime":"2026-02-01T14:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.113673 4820 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.148996 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 03:47:57.793999449 +0000 UTC Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.195189 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.195228 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.195239 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.195258 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.195276 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:25Z","lastTransitionTime":"2026-02-01T14:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.197782 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.197986 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.198041 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:25 crc kubenswrapper[4820]: E0201 14:21:25.198037 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:21:25 crc kubenswrapper[4820]: E0201 14:21:25.198129 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:21:25 crc kubenswrapper[4820]: E0201 14:21:25.198343 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
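[Annotation] The cluster of records above is the kubelet's sync loop reacting to a missing CNI configuration: the runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no config file, so the node's Ready condition is re-recorded as False on every iteration and sandbox creation is skipped for the pending pods listed. A minimal Go sketch of that derivation follows; the names and structure are illustrative, not the kubelet's actual internals.

```go
// readiness.go - simplified sketch of how a kubelet-style agent might derive
// the node Ready condition from the container runtime's network status.
// Illustrative only; the real kubelet logic lives in setters.go and the CRI.
package main

import (
	"fmt"
	"os"
	"time"
)

type Condition struct {
	Type, Status, Reason, Message string
	LastHeartbeatTime             time.Time
}

// networkReady mimics the check behind "NetworkReady=false": the network
// plugin is treated as unready until a CNI conf file appears in confDir.
func networkReady(confDir string) (bool, string) {
	entries, err := os.ReadDir(confDir)
	if err != nil || len(entries) == 0 {
		return false, fmt.Sprintf("no CNI configuration file in %s", confDir)
	}
	return true, ""
}

func readyCondition(confDir string) Condition {
	ok, msg := networkReady(confDir)
	c := Condition{Type: "Ready", Status: "True", LastHeartbeatTime: time.Now()}
	if !ok {
		c.Status = "False"
		c.Reason = "KubeletNotReady"
		c.Message = "container runtime network not ready: " + msg
	}
	return c
}

func main() {
	// The conf dir matches the path named in the log messages above.
	fmt.Printf("%+v\n", readyCondition("/etc/kubernetes/cni/net.d"))
}
```

Once a network provider (here, OVN-Kubernetes via multus) writes its config into that directory, the same loop flips the condition back to Ready without any kubelet restart.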
Feb 01 14:21:25 crc kubenswrapper[4820]: E0201 14:21:25.198343 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.297263 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.297647 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.297660 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.297678 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.297691 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:25Z","lastTransitionTime":"2026-02-01T14:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.370853 4820 generic.go:334] "Generic (PLEG): container finished" podID="e228d5b6-4ae4-4c56-b52d-d895d1e4ab67" containerID="1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f" exitCode=0
Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.370955 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" event={"ID":"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67","Type":"ContainerDied","Data":"1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f"}
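[Annotation] Each of the "Failed to update status for pod" records that follow fails for the reason spelled out at the end of its message: the serving certificate of the network-node-identity webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-01, so every TLS handshake fails verification before a patch can be delivered. A standalone sketch reproducing the same validity check that the TLS stack performs; the certificate file path here is hypothetical (on a CRC node the webhook cert is held in a secret, not necessarily on disk).

```go
// certcheck.go - minimal sketch: reproduce the x509 validity window check
// that fails in the log with "certificate has expired or is not yet valid".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("webhook-serving.crt") // hypothetical path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	// The handshake error corresponds to now falling outside
	// the [NotBefore, NotAfter] window of the serving certificate.
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("INVALID: current time %s is outside [%s, %s]\n",
			now.Format(time.RFC3339),
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339))
		return
	}
	fmt.Println("certificate is within its validity window")
}
```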
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:25Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.400976 4820 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.401035 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.401066 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.401092 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.401110 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:25Z","lastTransitionTime":"2026-02-01T14:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.408267 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:25Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.429071 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:25Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.444396 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:25Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.458408 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:25Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.483090 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
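[Annotation] The patch bodies embedded in these records are Go-quoted JSON, with one escaping layer added per formatting pass (klog, then the journal), which is why they are hard to read here. A small sketch that peels one layer with strconv.Unquote and pretty-prints the result; the trimmed patch below reuses only the uid from the kube-apiserver-crc record above, the rest of that patch being elided.

```go
// unquote.go - sketch: recover a readable JSON status patch from a
// "Failed to update status for pod" journal record. Unquote reverses one
// layer of Go string escaping; apply it again if the capture added more.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// Trimmed example of an escaped patch as it appears in the log.
	raw := `"{\"metadata\":{\"uid\":\"2aea2d10-281a-4986-b42d-205f8c7c1272\"},\"status\":{\"phase\":\"Running\"}}"`
	unquoted, err := strconv.Unquote(raw)
	if err != nil {
		panic(err)
	}
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(unquoted), "", "  "); err != nil {
		panic(err)
	}
	fmt.Println(pretty.String())
}
```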
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:25Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.495119 4820 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.499073 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:25Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.503254 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.503284 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.503292 4820 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.503305 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.503313 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:25Z","lastTransitionTime":"2026-02-01T14:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.509304 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:25Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.524374 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:25Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.543869 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:25Z 
is after 2025-08-24T17:21:41Z" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.557133 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:25Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.569951 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:25Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.581746 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:25Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.590997 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:25Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.605859 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.605931 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.606013 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.606041 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.606057 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:25Z","lastTransitionTime":"2026-02-01T14:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.708947 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.708990 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.709010 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.709027 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.709037 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:25Z","lastTransitionTime":"2026-02-01T14:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.811431 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.811464 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.811472 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.811485 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.811494 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:25Z","lastTransitionTime":"2026-02-01T14:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.913777 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.913812 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.913820 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.913835 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:25 crc kubenswrapper[4820]: I0201 14:21:25.913844 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:25Z","lastTransitionTime":"2026-02-01T14:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.016809 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.016894 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.016912 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.016930 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.016940 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:26Z","lastTransitionTime":"2026-02-01T14:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.120838 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.120935 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.120953 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.120978 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.120995 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:26Z","lastTransitionTime":"2026-02-01T14:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.149566 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 04:31:03.829022436 +0000 UTC Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.222766 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.222836 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.222850 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.222912 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.222925 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:26Z","lastTransitionTime":"2026-02-01T14:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.325453 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.325519 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.325537 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.325561 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.325579 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:26Z","lastTransitionTime":"2026-02-01T14:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.379617 4820 generic.go:334] "Generic (PLEG): container finished" podID="e228d5b6-4ae4-4c56-b52d-d895d1e4ab67" containerID="e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c" exitCode=0 Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.379678 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" event={"ID":"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67","Type":"ContainerDied","Data":"e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c"} Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.386432 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerStarted","Data":"c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037"} Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.402810 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:26Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.425099 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:26Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.428726 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.428759 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.428772 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.428787 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.428796 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:26Z","lastTransitionTime":"2026-02-01T14:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.447783 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:26Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.460927 4820 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.463000 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:26Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.481039 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:26Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.496024 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:26Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.511800 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:26Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.524252 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:26Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.530762 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.530795 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.530807 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.530856 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.530890 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:26Z","lastTransitionTime":"2026-02-01T14:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.537214 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:26Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.548338 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:1
9Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:26Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.566853 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:26Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.581948 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:26Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.595764 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:26Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.611484 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:26Z 
is after 2025-08-24T17:21:41Z" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.633493 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.633530 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.633542 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.633559 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.633571 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:26Z","lastTransitionTime":"2026-02-01T14:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.736204 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.736249 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.736260 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.736281 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.736293 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:26Z","lastTransitionTime":"2026-02-01T14:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.778937 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:21:26 crc kubenswrapper[4820]: E0201 14:21:26.779147 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:21:34.779119841 +0000 UTC m=+36.299486125 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.839317 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.839395 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.839410 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.839431 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.839453 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:26Z","lastTransitionTime":"2026-02-01T14:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.880260 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.880316 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.880341 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.880376 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:26 crc kubenswrapper[4820]: E0201 14:21:26.880491 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 14:21:26 crc kubenswrapper[4820]: E0201 14:21:26.880507 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 14:21:26 crc kubenswrapper[4820]: E0201 14:21:26.880546 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:34.880528717 +0000 UTC m=+36.400894991 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 14:21:26 crc kubenswrapper[4820]: E0201 14:21:26.880553 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 14:21:26 crc kubenswrapper[4820]: E0201 14:21:26.880571 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:26 crc kubenswrapper[4820]: E0201 14:21:26.880591 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 14:21:26 crc kubenswrapper[4820]: E0201 14:21:26.880634 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 14:21:26 crc kubenswrapper[4820]: E0201 14:21:26.880659 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 14:21:26 crc kubenswrapper[4820]: E0201 14:21:26.880678 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:26 crc kubenswrapper[4820]: E0201 14:21:26.880640 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:34.880621479 +0000 UTC m=+36.400987763 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:26 crc kubenswrapper[4820]: E0201 14:21:26.880767 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:34.880712431 +0000 UTC m=+36.401078715 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 14:21:26 crc kubenswrapper[4820]: E0201 14:21:26.880793 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:34.880785633 +0000 UTC m=+36.401151917 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.942469 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.942526 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.942543 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.942569 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:26 crc kubenswrapper[4820]: I0201 14:21:26.942587 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:26Z","lastTransitionTime":"2026-02-01T14:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.046090 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.046162 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.046181 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.046206 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.046225 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:27Z","lastTransitionTime":"2026-02-01T14:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.148997 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.149034 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.149043 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.149055 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.149066 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:27Z","lastTransitionTime":"2026-02-01T14:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.150323 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 09:24:51.711280661 +0000 UTC Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.198763 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.198838 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:27 crc kubenswrapper[4820]: E0201 14:21:27.198931 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.199379 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:27 crc kubenswrapper[4820]: E0201 14:21:27.199497 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:21:27 crc kubenswrapper[4820]: E0201 14:21:27.199568 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.252725 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.252791 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.252808 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.252844 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.252862 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:27Z","lastTransitionTime":"2026-02-01T14:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.356367 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.356425 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.356444 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.356472 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.356493 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:27Z","lastTransitionTime":"2026-02-01T14:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.393300 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" event={"ID":"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67","Type":"ContainerStarted","Data":"684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46"} Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.411381 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:27Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.431254 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:27Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.449422 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:27Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.458925 4820 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.458984 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.459002 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.459049 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.459068 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:27Z","lastTransitionTime":"2026-02-01T14:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.471275 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:27Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.495199 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:27Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.518493 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:27Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.536485 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:27Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.549955 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:27Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.563709 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.563757 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.563770 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.563785 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.563799 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:27Z","lastTransitionTime":"2026-02-01T14:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.577224 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:27Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.605259 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:27Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.623917 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25971
26bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:27Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.639261 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:27Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.658680 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:27Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.667486 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.667539 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.667557 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.667580 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.667599 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:27Z","lastTransitionTime":"2026-02-01T14:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.674690 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:27Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.771243 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.771291 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.771305 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.771325 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.774560 4820 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:27Z","lastTransitionTime":"2026-02-01T14:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.877821 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.878248 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.878267 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.878291 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.878310 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:27Z","lastTransitionTime":"2026-02-01T14:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.981369 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.981441 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.981465 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.981494 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:27 crc kubenswrapper[4820]: I0201 14:21:27.981516 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:27Z","lastTransitionTime":"2026-02-01T14:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.084506 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.084572 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.084592 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.084614 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.084632 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:28Z","lastTransitionTime":"2026-02-01T14:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.150848 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 00:14:59.073119594 +0000 UTC Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.186955 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.187020 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.187037 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.187061 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.187078 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:28Z","lastTransitionTime":"2026-02-01T14:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.290094 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.290147 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.290160 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.290180 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.290191 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:28Z","lastTransitionTime":"2026-02-01T14:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.393044 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.393099 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.393117 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.393139 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.393158 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:28Z","lastTransitionTime":"2026-02-01T14:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.400851 4820 generic.go:334] "Generic (PLEG): container finished" podID="e228d5b6-4ae4-4c56-b52d-d895d1e4ab67" containerID="684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46" exitCode=0 Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.401058 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" event={"ID":"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67","Type":"ContainerDied","Data":"684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46"} Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.424265 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:28Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.446509 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:28Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.464291 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:28Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.483177 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:28Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.496624 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.496672 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.496689 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.496713 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.496730 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:28Z","lastTransitionTime":"2026-02-01T14:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.504071 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:28Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.528427 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:28Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.549078 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:28Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.564696 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:28Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.583571 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:28Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.599474 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.599521 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.599531 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.599548 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.599561 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:28Z","lastTransitionTime":"2026-02-01T14:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.601795 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b
9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:28Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.613121 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:28Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.623474 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:28Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.635772 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:28Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.644038 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:28Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.701683 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.701720 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.701734 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.701752 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.701765 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:28Z","lastTransitionTime":"2026-02-01T14:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.804220 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.804271 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.804293 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.804321 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.804344 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:28Z","lastTransitionTime":"2026-02-01T14:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.907176 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.907231 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.907247 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.907269 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:28 crc kubenswrapper[4820]: I0201 14:21:28.907285 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:28Z","lastTransitionTime":"2026-02-01T14:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.010688 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.010739 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.010752 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.010774 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.010793 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:29Z","lastTransitionTime":"2026-02-01T14:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.112649 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.112719 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.112732 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.112746 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.112756 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:29Z","lastTransitionTime":"2026-02-01T14:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.151290 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 12:11:05.832155662 +0000 UTC Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.198814 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.198955 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:29 crc kubenswrapper[4820]: E0201 14:21:29.198991 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:21:29 crc kubenswrapper[4820]: E0201 14:21:29.199045 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.199132 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:29 crc kubenswrapper[4820]: E0201 14:21:29.199217 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.212319 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.215182 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.215241 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.215262 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.215287 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.215303 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:29Z","lastTransitionTime":"2026-02-01T14:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.225927 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.238993 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.253179 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.267144 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountP
ath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.282043 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.295002 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.307023 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.315429 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.317113 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.317238 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.317308 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.317367 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.317419 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:29Z","lastTransitionTime":"2026-02-01T14:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.332195 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.344896 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.357515 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.370084 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.380965 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.407291 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerStarted","Data":"4410a10f10daed5e269510a89e2187288e83aca5c6a930b299e95a69002869fa"} Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.408137 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.408197 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.412068 4820 generic.go:334] "Generic (PLEG): container finished" podID="e228d5b6-4ae4-4c56-b52d-d895d1e4ab67" containerID="9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4" exitCode=0 Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.412116 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" event={"ID":"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67","Type":"ContainerDied","Data":"9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4"} Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.421222 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.422948 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.423003 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.423019 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.423040 
4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.423058 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:29Z","lastTransitionTime":"2026-02-01T14:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.433200 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z"
Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.445679 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z"
Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.454357 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx"
Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.454588 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx"
Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.456329 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z"
Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.467919 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z"
Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.482789 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z"
Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.493025 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z"
Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.505260 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z"
Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.517480 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z"
Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.525218 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.525259 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.525270 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.525288 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.525302 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:29Z","lastTransitionTime":"2026-02-01T14:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.532021 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z"
Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.544949 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z"
Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.559843 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z"
Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.576122 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z"
Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.603688 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4410a10f10daed5e269510a89e2187288e83aca5c6a930b299e95a69002869fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z"
Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.616937 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.628671 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.628713 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.628724 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.628743 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.628756 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:29Z","lastTransitionTime":"2026-02-01T14:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.631070 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.642947 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.651664 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.664006 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.674472 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.684323 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.696688 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.706124 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.720207 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.731423 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.731479 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:29 crc 
kubenswrapper[4820]: I0201 14:21:29.731492 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.731511 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.731525 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:29Z","lastTransitionTime":"2026-02-01T14:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.733777 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.746405 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.783579 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.798392 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4410a10f10daed5e269510a89e2187288e83aca5c6a930b299e95a69002869fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.834146 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.834183 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.834193 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.834206 4820 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.834214 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:29Z","lastTransitionTime":"2026-02-01T14:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.936420 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.936466 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.936477 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.936496 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:29 crc kubenswrapper[4820]: I0201 14:21:29.936514 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:29Z","lastTransitionTime":"2026-02-01T14:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.039385 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.039436 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.039453 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.039474 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.039489 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:30Z","lastTransitionTime":"2026-02-01T14:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.141228 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.141274 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.141286 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.141302 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.141313 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:30Z","lastTransitionTime":"2026-02-01T14:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.151465 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 21:38:39.750814239 +0000 UTC Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.243473 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.243514 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.243525 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.243540 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.243552 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:30Z","lastTransitionTime":"2026-02-01T14:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.345677 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.345708 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.345716 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.345748 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.345757 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:30Z","lastTransitionTime":"2026-02-01T14:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.419429 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" event={"ID":"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67","Type":"ContainerStarted","Data":"92b92ec7eb262908b4723fb55382b9658604ded30de9c7a9ca163cb8f2140f7e"} Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.419524 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.444637 4820 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.446548 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:30Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.447952 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.447996 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.448013 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.448035 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.448054 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:30Z","lastTransitionTime":"2026-02-01T14:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.468735 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:30Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.506123 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:30Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.516678 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:30Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.525790 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:30Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.535890 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:30Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.543945 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:30Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.550259 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.550293 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.550307 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:30 crc 
kubenswrapper[4820]: I0201 14:21:30.550399 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.550411 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:30Z","lastTransitionTime":"2026-02-01T14:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.555894 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b92ec7eb262908b4723fb55382b9658604ded30de9c7a9ca163cb8f2140f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:2
1Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:30Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.567301 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216
c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:30Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.578227 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:30Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.588085 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:30Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.604753 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4410a10f10daed5e269510a89e2187288e83aca5c6a930b299e95a69002869fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:30Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.615199 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:30Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.625664 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:30Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.651966 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.652004 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.652014 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.652029 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.652041 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:30Z","lastTransitionTime":"2026-02-01T14:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.754859 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.754947 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.754964 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.754988 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.755008 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:30Z","lastTransitionTime":"2026-02-01T14:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.856822 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.856850 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.856859 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.856874 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.856883 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:30Z","lastTransitionTime":"2026-02-01T14:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.959127 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.959156 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.959164 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.959178 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:30 crc kubenswrapper[4820]: I0201 14:21:30.959188 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:30Z","lastTransitionTime":"2026-02-01T14:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.061564 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.061604 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.061614 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.061629 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.061637 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:31Z","lastTransitionTime":"2026-02-01T14:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.152130 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 08:53:21.693269135 +0000 UTC Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.163547 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.163586 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.163596 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.163612 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.163624 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:31Z","lastTransitionTime":"2026-02-01T14:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.198387 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.198449 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.198387 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:31 crc kubenswrapper[4820]: E0201 14:21:31.198512 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:21:31 crc kubenswrapper[4820]: E0201 14:21:31.198587 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:21:31 crc kubenswrapper[4820]: E0201 14:21:31.198650 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.266251 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.266317 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.266334 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.266359 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.266377 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:31Z","lastTransitionTime":"2026-02-01T14:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.368921 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.368989 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.369008 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.369032 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.369049 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:31Z","lastTransitionTime":"2026-02-01T14:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.421635 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.472070 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.472129 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.472147 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.472170 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.472187 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:31Z","lastTransitionTime":"2026-02-01T14:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.575051 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.575092 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.575104 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.575121 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.575133 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:31Z","lastTransitionTime":"2026-02-01T14:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.680518 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.680576 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.680591 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.680617 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.680633 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:31Z","lastTransitionTime":"2026-02-01T14:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.783934 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.783990 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.784001 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.784017 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.784030 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:31Z","lastTransitionTime":"2026-02-01T14:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.886402 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.886449 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.886459 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.886473 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.886482 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:31Z","lastTransitionTime":"2026-02-01T14:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.988846 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.988890 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.988902 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.988937 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:31 crc kubenswrapper[4820]: I0201 14:21:31.988950 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:31Z","lastTransitionTime":"2026-02-01T14:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.091593 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.091642 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.091655 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.091674 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.091686 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:32Z","lastTransitionTime":"2026-02-01T14:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.152510 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 17:30:47.390500134 +0000 UTC Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.221595 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.221636 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.221650 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.221666 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.221679 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:32Z","lastTransitionTime":"2026-02-01T14:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.317489 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h"] Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.318394 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.320437 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.321552 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.324184 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.324250 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.324272 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.324296 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.324318 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:32Z","lastTransitionTime":"2026-02-01T14:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.338075 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:32Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.342535 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whsvh\" (UniqueName: 
\"kubernetes.io/projected/eab0d2e6-b6a3-4167-83a8-9a1e4662fa38-kube-api-access-whsvh\") pod \"ovnkube-control-plane-749d76644c-ptp6h\" (UID: \"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.342602 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eab0d2e6-b6a3-4167-83a8-9a1e4662fa38-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ptp6h\" (UID: \"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.342697 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eab0d2e6-b6a3-4167-83a8-9a1e4662fa38-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ptp6h\" (UID: \"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.342752 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eab0d2e6-b6a3-4167-83a8-9a1e4662fa38-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ptp6h\" (UID: \"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.351615 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:32Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.368464 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b92ec7eb262908b4723fb55382b9658604ded30de9c7a9ca163cb8f2140f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:32Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.383553 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:32Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.398163 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:32Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.426330 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.426588 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.426657 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.426765 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.426932 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:32Z","lastTransitionTime":"2026-02-01T14:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.429151 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4410a10f10daed5e269510a89e2187288e83aca5c6a930b299e95a69002869fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:32Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.443091 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:32Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.443777 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whsvh\" (UniqueName: \"kubernetes.io/projected/eab0d2e6-b6a3-4167-83a8-9a1e4662fa38-kube-api-access-whsvh\") pod \"ovnkube-control-plane-749d76644c-ptp6h\" (UID: \"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.444190 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eab0d2e6-b6a3-4167-83a8-9a1e4662fa38-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ptp6h\" (UID: \"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.444846 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eab0d2e6-b6a3-4167-83a8-9a1e4662fa38-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ptp6h\" (UID: \"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.445830 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eab0d2e6-b6a3-4167-83a8-9a1e4662fa38-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ptp6h\" (UID: \"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.444789 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eab0d2e6-b6a3-4167-83a8-9a1e4662fa38-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ptp6h\" (UID: \"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 
14:21:32.446450 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eab0d2e6-b6a3-4167-83a8-9a1e4662fa38-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ptp6h\" (UID: \"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.450634 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eab0d2e6-b6a3-4167-83a8-9a1e4662fa38-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ptp6h\" (UID: \"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.459941 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:32Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.460109 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whsvh\" (UniqueName: \"kubernetes.io/projected/eab0d2e6-b6a3-4167-83a8-9a1e4662fa38-kube-api-access-whsvh\") pod \"ovnkube-control-plane-749d76644c-ptp6h\" (UID: \"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.473677 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:32Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.487194 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:32Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.497944 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptp6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:32Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.509212 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:32Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.519282 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:32Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.529036 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.529073 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.529101 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.529115 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.529124 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:32Z","lastTransitionTime":"2026-02-01T14:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.530227 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
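Interleaved with the patch failures, the kubelet keeps republishing Ready=False with reason KubeletNotReady because the runtime reports NetworkReady=false: there is no CNI config in /etc/kubernetes/cni/net.d/ yet, and ovnkube-control-plane, which would provide the network, is itself still ContainerCreating. A sketch of the directory scan implied by the message, assuming libcni's usual .conf/.conflist/.json extensions (the exact matching rules belong to the runtime):

```go
// Diagnostic sketch: list the CNI network configs a runtime would find in
// the directory named in the "NetworkPluginNotReady" message. An empty
// result matches the "no CNI configuration file" condition that keeps the
// node NotReady until the network operator writes a config.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path from the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("readdir:", err)
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni scans for (assumed)
			fmt.Println("candidate CNI config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration file found; node stays NotReady")
	}
}
```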
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:32Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.540717 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:32Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.630830 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.631268 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.631442 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.631599 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.631745 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:32Z","lastTransitionTime":"2026-02-01T14:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.632720 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" Feb 01 14:21:32 crc kubenswrapper[4820]: W0201 14:21:32.644920 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeab0d2e6_b6a3_4167_83a8_9a1e4662fa38.slice/crio-520e2ff5eaf204c2393d6c096a06ae5d4523d4791ae09c8c35355150aebce1a0 WatchSource:0}: Error finding container 520e2ff5eaf204c2393d6c096a06ae5d4523d4791ae09c8c35355150aebce1a0: Status 404 returned error can't find the container with id 520e2ff5eaf204c2393d6c096a06ae5d4523d4791ae09c8c35355150aebce1a0 Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.734392 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.734761 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.734773 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.734790 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.734811 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:32Z","lastTransitionTime":"2026-02-01T14:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.837858 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.838061 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.838148 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.838198 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.838211 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:32Z","lastTransitionTime":"2026-02-01T14:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
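The cadvisor warning just above ("Failed to process watch event ... 404") is a startup race rather than a failure: cadvisor sees the new crio-<id> cgroup before CRI-O has registered the container, and the very same ID (520e2ff5...) appears in a ContainerStarted PLEG event moments later in this log. A hypothetical helper showing the correlation, not cadvisor's actual code:

```go
// Sketch correlating the cadvisor watch warning with the later PLEG event:
// the cgroup path embeds the container/sandbox ID as "crio-<id>", and the
// same ID shows up in ContainerStarted shortly afterward, so the 404 just
// means cadvisor raced CRI-O's registration.
package main

import (
	"fmt"
	"path"
	"strings"
)

func idFromCgroup(cgroupPath string) string {
	base := path.Base(cgroupPath) // e.g. "crio-520e...1a0"
	return strings.TrimSuffix(strings.TrimPrefix(base, "crio-"), ".scope")
}

func main() {
	watch := "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeab0d2e6_b6a3_4167_83a8_9a1e4662fa38.slice/crio-520e2ff5eaf204c2393d6c096a06ae5d4523d4791ae09c8c35355150aebce1a0"
	pleg := "520e2ff5eaf204c2393d6c096a06ae5d4523d4791ae09c8c35355150aebce1a0"
	fmt.Println("same object:", idFromCgroup(watch) == pleg) // true
}
```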
Has your network provider started?"} Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.941629 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.941662 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.941700 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.941716 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:32 crc kubenswrapper[4820]: I0201 14:21:32.941728 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:32Z","lastTransitionTime":"2026-02-01T14:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.043759 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.043789 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.043797 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.043809 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.043818 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:33Z","lastTransitionTime":"2026-02-01T14:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.146128 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.146172 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.146182 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.146196 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.146206 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:33Z","lastTransitionTime":"2026-02-01T14:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.152673 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 15:48:00.499027499 +0000 UTC Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.198356 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.198397 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.198421 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:33 crc kubenswrapper[4820]: E0201 14:21:33.198526 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:21:33 crc kubenswrapper[4820]: E0201 14:21:33.198581 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:21:33 crc kubenswrapper[4820]: E0201 14:21:33.198650 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.249164 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.249368 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.249445 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.249531 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.249605 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:33Z","lastTransitionTime":"2026-02-01T14:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
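The certificate_manager line above is worth decoding: the kubelet serving certificate is valid until 2026-02-24, and client-go schedules rotation at a jittered 70-90% of the certificate's lifetime, which is how a deadline of 2026-01-03 arises; since the logged clock (2026-02-01) is already past that deadline, rotation is due immediately. A sketch under those assumptions (the one-year lifetime is inferred from the logged dates, not stated in the log):

```go
// Sketch of the rotation-deadline rule behind the certificate_manager line:
// client-go's certificate manager picks a deadline at a jittered 70-90% of
// the validity window. The 0.7+0.2*rand fraction is the upstream heuristic
// (an assumption worth checking against your client-go version).
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // from the log
	notBefore := notAfter.Add(-365 * 24 * time.Hour)                // assumed 1y lifetime
	deadline := rotationDeadline(notBefore, notAfter)
	now, _ := time.Parse(time.RFC3339, "2026-02-01T14:21:33Z") // log timestamp
	fmt.Println("rotation deadline:", deadline)
	fmt.Println("rotation already due:", now.After(deadline))
}
```

With a one-year lifetime the 70-90% window spans roughly 2025-11-06 to 2026-01-18, which is consistent with the logged deadline of 2026-01-03.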
Has your network provider started?"} Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.407273 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.407314 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.407323 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.407339 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.407348 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:33Z","lastTransitionTime":"2026-02-01T14:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.429187 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" event={"ID":"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38","Type":"ContainerStarted","Data":"520e2ff5eaf204c2393d6c096a06ae5d4523d4791ae09c8c35355150aebce1a0"} Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.510256 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.510294 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.510304 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.510321 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.510331 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:33Z","lastTransitionTime":"2026-02-01T14:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.612397 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.612438 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.612450 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.612467 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.612479 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:33Z","lastTransitionTime":"2026-02-01T14:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.714680 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.714990 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.715167 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.715358 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.715529 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:33Z","lastTransitionTime":"2026-02-01T14:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.781927 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-dj7sg"] Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.782503 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:21:33 crc kubenswrapper[4820]: E0201 14:21:33.782663 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.798654 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:33Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.809135 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smpkd\" (UniqueName: \"kubernetes.io/projected/8befd56b-2ebe-48c7-9027-4f906b2e09d5-kube-api-access-smpkd\") pod \"network-metrics-daemon-dj7sg\" (UID: \"8befd56b-2ebe-48c7-9027-4f906b2e09d5\") " pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.809242 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs\") pod 
\"network-metrics-daemon-dj7sg\" (UID: \"8befd56b-2ebe-48c7-9027-4f906b2e09d5\") " pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.814340 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptp6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:33Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.817777 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.817845 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.817870 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.817937 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.817960 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:33Z","lastTransitionTime":"2026-02-01T14:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.831191 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:33Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.850369 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:33Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.865369 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:33Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.879360 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:33Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.893300 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:33Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.909794 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smpkd\" (UniqueName: \"kubernetes.io/projected/8befd56b-2ebe-48c7-9027-4f906b2e09d5-kube-api-access-smpkd\") pod \"network-metrics-daemon-dj7sg\" (UID: \"8befd56b-2ebe-48c7-9027-4f906b2e09d5\") " pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.909870 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs\") pod \"network-metrics-daemon-dj7sg\" (UID: \"8befd56b-2ebe-48c7-9027-4f906b2e09d5\") " pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:21:33 crc kubenswrapper[4820]: E0201 14:21:33.910015 4820 
secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 14:21:33 crc kubenswrapper[4820]: E0201 14:21:33.910070 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs podName:8befd56b-2ebe-48c7-9027-4f906b2e09d5 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:34.410053834 +0000 UTC m=+35.930420128 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs") pod "network-metrics-daemon-dj7sg" (UID: "8befd56b-2ebe-48c7-9027-4f906b2e09d5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.911935 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:33Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.920517 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.920555 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.920564 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.920579 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.920589 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:33Z","lastTransitionTime":"2026-02-01T14:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.925379 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:33Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.933603 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smpkd\" (UniqueName: \"kubernetes.io/projected/8befd56b-2ebe-48c7-9027-4f906b2e09d5-kube-api-access-smpkd\") pod \"network-metrics-daemon-dj7sg\" (UID: \"8befd56b-2ebe-48c7-9027-4f906b2e09d5\") " pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.938335 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2
af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:33Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.949159 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:33Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.964601 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b92ec7eb262908b4723fb55382b9658604ded30de9c7a9ca163cb8f2140f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:33Z is after 
2025-08-24T17:21:41Z" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.976099 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dj7sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8befd56b-2ebe-48c7-9027-4f906b2e09d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dj7sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:33Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:33 crc kubenswrapper[4820]: I0201 14:21:33.998487 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4410a10f10daed5e269510a89e2187288e83aca5c6a930b299e95a69002869fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:33Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.013171 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.023446 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.023518 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.023532 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.023553 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.023564 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:34Z","lastTransitionTime":"2026-02-01T14:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.028020 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.127219 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.127309 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.127334 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.127365 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.127388 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:34Z","lastTransitionTime":"2026-02-01T14:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.153497 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 14:02:12.489713822 +0000 UTC Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.230498 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.230554 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.230595 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.230618 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.230634 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:34Z","lastTransitionTime":"2026-02-01T14:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.333835 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.333866 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.333889 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.333907 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.333916 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:34Z","lastTransitionTime":"2026-02-01T14:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.415990 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs\") pod \"network-metrics-daemon-dj7sg\" (UID: \"8befd56b-2ebe-48c7-9027-4f906b2e09d5\") " pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:21:34 crc kubenswrapper[4820]: E0201 14:21:34.416435 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 14:21:34 crc kubenswrapper[4820]: E0201 14:21:34.416506 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs podName:8befd56b-2ebe-48c7-9027-4f906b2e09d5 nodeName:}" failed. 
No retries permitted until 2026-02-01 14:21:35.416484231 +0000 UTC m=+36.936850555 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs") pod "network-metrics-daemon-dj7sg" (UID: "8befd56b-2ebe-48c7-9027-4f906b2e09d5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.435295 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4skx_2c428279-629a-4fd5-9955-1598ed4f6f84/ovnkube-controller/0.log" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.435733 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.435779 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.435792 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.435811 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.435824 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:34Z","lastTransitionTime":"2026-02-01T14:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.438353 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerID="4410a10f10daed5e269510a89e2187288e83aca5c6a930b299e95a69002869fa" exitCode=1 Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.438456 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerDied","Data":"4410a10f10daed5e269510a89e2187288e83aca5c6a930b299e95a69002869fa"} Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.439236 4820 scope.go:117] "RemoveContainer" containerID="4410a10f10daed5e269510a89e2187288e83aca5c6a930b299e95a69002869fa" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.441346 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" event={"ID":"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38","Type":"ContainerStarted","Data":"0756a1ed611ee30513307450b56a0a19ffdd6d4671c7ae1b9e59ead1bddd7564"} Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.454556 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.468547 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.480383 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.494132 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.507081 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b92ec7eb262908b4723fb55382b9658604ded30de9c7a9ca163cb8f2140f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.530774 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4410a10f10daed5e269510a89e2187288e83aca5c6a930b299e95a69002869fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4410a10f10daed5e269510a89e2187288e83aca5c6a930b299e95a69002869fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"message\\\":\\\"dler 8 for removal\\\\nI0201 14:21:32.990140 6099 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0201 14:21:32.990154 6099 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0201 14:21:32.990165 6099 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0201 14:21:32.990245 6099 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 14:21:32.990310 6099 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 14:21:32.990323 6099 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0201 14:21:32.990331 6099 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0201 14:21:32.990339 6099 handler.go:208] Removed *v1.Node event handler 2\\\\nI0201 14:21:32.990347 6099 handler.go:208] Removed *v1.Node event handler 7\\\\nI0201 14:21:32.990362 6099 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0201 14:21:32.990369 6099 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0201 14:21:32.990376 6099 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0201 14:21:32.990441 6099 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 14:21:32.990542 6099 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 14:21:32.990996 6099 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.538390 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.538415 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.538423 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.538437 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.538449 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:34Z","lastTransitionTime":"2026-02-01T14:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.541406 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dj7sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8befd56b-2ebe-48c7-9027-4f906b2e09d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dj7sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.552234 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.563097 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.579607 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.589234 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.600052 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptp6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.611848 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.623564 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.638017 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.640857 4820 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.640938 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.640951 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.640967 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.640978 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:34Z","lastTransitionTime":"2026-02-01T14:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.651211 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.744214 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.744263 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.744276 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.744295 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.744308 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:34Z","lastTransitionTime":"2026-02-01T14:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.803335 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.822021 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:21:34 crc kubenswrapper[4820]: E0201 14:21:34.822232 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:21:50.82220413 +0000 UTC m=+52.342570454 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.831973 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.849346 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.849384 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.849401 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.849424 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.849442 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:34Z","lastTransitionTime":"2026-02-01T14:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.851916 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.870003 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.885196 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.897142 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.906350 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.920526 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b92ec7eb262908b4723fb55382b9658604ded30de9c7a9ca163cb8f2140f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.923313 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.923354 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.923382 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.923425 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:34 crc kubenswrapper[4820]: E0201 14:21:34.923492 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 14:21:34 crc kubenswrapper[4820]: E0201 14:21:34.923642 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:50.923626076 +0000 UTC m=+52.443992370 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 14:21:34 crc kubenswrapper[4820]: E0201 14:21:34.923541 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 14:21:34 crc kubenswrapper[4820]: E0201 14:21:34.923736 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:50.923719389 +0000 UTC m=+52.444085673 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 14:21:34 crc kubenswrapper[4820]: E0201 14:21:34.923557 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 14:21:34 crc kubenswrapper[4820]: E0201 14:21:34.923761 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 14:21:34 crc kubenswrapper[4820]: E0201 14:21:34.923772 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:34 crc kubenswrapper[4820]: E0201 14:21:34.923795 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:50.923789101 +0000 UTC m=+52.444155385 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:34 crc kubenswrapper[4820]: E0201 14:21:34.923568 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 14:21:34 crc kubenswrapper[4820]: E0201 14:21:34.923808 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 14:21:34 crc kubenswrapper[4820]: E0201 14:21:34.923815 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:34 crc kubenswrapper[4820]: E0201 14:21:34.923831 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:50.923826942 +0000 UTC m=+52.444193226 (durationBeforeRetry 16s). 
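Diagnostic note: the "object ... not registered" failures above are raised by the kubelet's local volume-source cache, not by the API server; the referenced ConfigMaps and Secrets may well exist and the condition normally clears once the pod is admitted and the kubelet begins watching those objects (the entries above show the mount retry backoff, durationBeforeRetry 16s). A minimal sketch for confirming the objects exist server-side while the kubelet cache catches up, assuming client-go is available and using an illustrative kubeconfig path; the namespace and name are taken from the log entries above:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative kubeconfig path; adjust for the environment at hand.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // Namespace and name taken from the "not registered" errors above.
        cm, err := cs.CoreV1().ConfigMaps("openshift-network-console").
            Get(context.TODO(), "networking-console-plugin", metav1.GetOptions{})
        if err != nil {
            log.Fatalf("configmap lookup failed: %v", err)
        }
        fmt.Printf("configmap %s/%s exists (resourceVersion %s)\n",
            cm.Namespace, cm.Name, cm.ResourceVersion)
    }

If this lookup succeeds while the kubelet still logs "not registered", the objects are fine and the errors reflect kubelet-side state that resolves on retry.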
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.932459 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e63
55e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.942465 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.951965 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.951994 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.952004 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.952018 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.952028 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:34Z","lastTransitionTime":"2026-02-01T14:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.958370 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4410a10f10daed5e269510a89e2187288e83aca5c6a930b299e95a69002869fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4410a10f10daed5e269510a89e2187288e83aca5c6a930b299e95a69002869fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"message\\\":\\\"dler 8 for removal\\\\nI0201 14:21:32.990140 6099 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0201 14:21:32.990154 6099 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0201 14:21:32.990165 6099 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0201 14:21:32.990245 6099 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 14:21:32.990310 6099 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 14:21:32.990323 6099 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0201 14:21:32.990331 6099 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0201 14:21:32.990339 6099 handler.go:208] Removed *v1.Node event handler 2\\\\nI0201 14:21:32.990347 6099 handler.go:208] Removed *v1.Node event handler 7\\\\nI0201 14:21:32.990362 6099 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0201 14:21:32.990369 6099 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0201 14:21:32.990376 6099 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0201 14:21:32.990441 6099 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 14:21:32.990542 6099 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 14:21:32.990996 6099 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.967420 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dj7sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8befd56b-2ebe-48c7-9027-4f906b2e09d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dj7sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.977981 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.988915 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:34 crc kubenswrapper[4820]: I0201 14:21:34.999540 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:34Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.015453 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.026997 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptp6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.053794 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.053820 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.053828 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.053840 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.053848 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:35Z","lastTransitionTime":"2026-02-01T14:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.154146 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 11:04:30.480839034 +0000 UTC Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.156198 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.156229 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.156237 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.156251 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.156260 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:35Z","lastTransitionTime":"2026-02-01T14:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.198770 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.198770 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.198932 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.199062 4820 util.go:30] "No sandbox for pod can be found. 
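Diagnostic note: nearly every status patch in this section fails with the same x509 error from the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743; its serving certificate expired 2025-08-24T17:21:41Z, long before the node's clock (2026-02-01). A minimal sketch for inspecting the validity window of whatever certificate that endpoint presents, using crypto/tls with verification skipped precisely because the certificate is expired (endpoint address taken from the log; run on the node):

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        // Webhook endpoint from the kubelet errors above; InsecureSkipVerify
        // lets the handshake complete so we can read the expired certificate.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Printf("subject:   %s\n", cert.Subject)
        fmt.Printf("notBefore: %s\n", cert.NotBefore)
        fmt.Printf("notAfter:  %s\n", cert.NotAfter)
        if time.Now().After(cert.NotAfter) {
            fmt.Println("certificate is expired, matching the kubelet's x509 error")
        }
    }

The notAfter printed here should match the "is after 2025-08-24T17:21:41Z" boundary quoted in every failed patch above.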
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:35 crc kubenswrapper[4820]: E0201 14:21:35.199047 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:21:35 crc kubenswrapper[4820]: E0201 14:21:35.199200 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:21:35 crc kubenswrapper[4820]: E0201 14:21:35.199323 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:21:35 crc kubenswrapper[4820]: E0201 14:21:35.199436 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.264675 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.264740 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.264758 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.264782 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.264801 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:35Z","lastTransitionTime":"2026-02-01T14:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.269015 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.269074 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.269093 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.269116 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.269134 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:35Z","lastTransitionTime":"2026-02-01T14:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:35 crc kubenswrapper[4820]: E0201 14:21:35.289717 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.294095 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.294138 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.294155 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.294178 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.294195 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:35Z","lastTransitionTime":"2026-02-01T14:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:35 crc kubenswrapper[4820]: E0201 14:21:35.311343 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.316052 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.316165 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
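Every webhook rejection in this log has the same root cause: the serving certificate behind https://127.0.0.1:9743 carries a NotAfter of 2025-08-24T17:21:41Z while the node clock reads 2026-02-01, and the error string is the one Go's crypto/x509 emits when the verification time falls outside a certificate's validity window. A minimal, self-contained Go sketch of that check follows; the certificate path is hypothetical, not taken from this log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path: substitute the certificate actually served on 127.0.0.1:9743.
	pemBytes, err := os.ReadFile("/path/to/webhook-serving-cert.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	// crypto/x509 rejects a chain when the verification time falls outside
	// [NotBefore, NotAfter]; that is exactly the failure logged above.
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("x509: not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	case now.After(cert.NotAfter):
		fmt.Printf("x509: expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}

Run against the webhook's actual serving certificate, this would print the same current-time/NotAfter pair that appears in the kubelet errors above.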
event="NodeHasNoDiskPressure" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.316189 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.316218 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.316241 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:35Z","lastTransitionTime":"2026-02-01T14:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:35 crc kubenswrapper[4820]: E0201 14:21:35.334920 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.339289 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.339345 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
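Note the shape of the failure: the kubelet logs "Error updating node status, will retry" and immediately attempts the PATCH again, since the upstream kubelet retries node-status updates a fixed number of times per sync before reporting a terminal failure. A schematic Go sketch of that bounded-retry pattern follows; the constant value and function names are illustrative stand-ins, not code copied from kubelet source.

package main

import (
	"errors"
	"fmt"
)

// Illustrative retry budget; the kubelet likewise caps node-status retries per sync.
const nodeStatusUpdateRetry = 5

// patchNodeStatus stands in for the PATCH that the admission webhook rejects above.
func patchNodeStatus() error {
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patchNodeStatus(); err != nil {
			// Mirrors the repeated "Error updating node status, will retry" records.
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println(err)
	}
}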
event="NodeHasNoDiskPressure" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.339362 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.339388 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.339404 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:35Z","lastTransitionTime":"2026-02-01T14:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:35 crc kubenswrapper[4820]: E0201 14:21:35.353844 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.358205 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.358235 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
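Each "Node became not ready" record serializes a core/v1 NodeCondition inline. A small Go sketch that decodes the Ready condition exactly as logged, to make the fields explicit; the nodeCondition struct is a hand-written mirror of the logged subset, not an import from k8s.io/api.

package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition mirrors the subset of NodeCondition fields present in the log.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Copied verbatim from the setters.go:603 records above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:35Z","lastTransitionTime":"2026-02-01T14:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s reason=%s\n", c.Type, c.Status, c.Reason)
}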
event="NodeHasNoDiskPressure" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.358244 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.358259 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.358270 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:35Z","lastTransitionTime":"2026-02-01T14:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:35 crc kubenswrapper[4820]: E0201 14:21:35.377601 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: E0201 14:21:35.377712 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.379296 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.379344 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.379356 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.379374 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.379387 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:35Z","lastTransitionTime":"2026-02-01T14:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.428530 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs\") pod \"network-metrics-daemon-dj7sg\" (UID: \"8befd56b-2ebe-48c7-9027-4f906b2e09d5\") " pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:21:35 crc kubenswrapper[4820]: E0201 14:21:35.428653 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 14:21:35 crc kubenswrapper[4820]: E0201 14:21:35.428699 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs podName:8befd56b-2ebe-48c7-9027-4f906b2e09d5 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:37.4286872 +0000 UTC m=+38.949053484 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs") pod "network-metrics-daemon-dj7sg" (UID: "8befd56b-2ebe-48c7-9027-4f906b2e09d5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.447792 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4skx_2c428279-629a-4fd5-9955-1598ed4f6f84/ovnkube-controller/0.log" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.451160 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerStarted","Data":"50e15c6751f48320456010d3ed5a0a63fd22ccc4af5795cbf9abb310bc2ba757"} Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.451293 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.453724 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" event={"ID":"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38","Type":"ContainerStarted","Data":"c39b2e34508f9b18b5d851feabce4a6117eeb3e95172f538ec23b0d2dafd1294"} Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.474086 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.481699 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.481757 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.481776 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.481800 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.481818 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:35Z","lastTransitionTime":"2026-02-01T14:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.501202 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.518158 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.545236 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.578165 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b92ec7eb262908b4723fb55382b9658604ded30de9c7a9ca163cb8f2140f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.583898 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.583959 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.583971 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.583987 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.583997 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:35Z","lastTransitionTime":"2026-02-01T14:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.603732 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.616566 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.628512 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.638465 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.657527 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e15c6751f48320456010d3ed5a0a63fd22ccc4af5795cbf9abb310bc2ba757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4410a10f10daed5e269510a89e2187288e83aca5c6a930b299e95a69002869fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"message\\\":\\\"dler 8 for removal\\\\nI0201 14:21:32.990140 6099 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0201 14:21:32.990154 6099 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0201 14:21:32.990165 6099 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0201 14:21:32.990245 6099 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 14:21:32.990310 6099 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 14:21:32.990323 6099 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0201 14:21:32.990331 6099 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0201 14:21:32.990339 6099 handler.go:208] Removed *v1.Node event handler 2\\\\nI0201 14:21:32.990347 6099 handler.go:208] Removed *v1.Node event handler 7\\\\nI0201 14:21:32.990362 6099 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0201 14:21:32.990369 6099 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0201 14:21:32.990376 6099 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0201 14:21:32.990441 6099 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 14:21:32.990542 6099 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 14:21:32.990996 6099 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.667309 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dj7sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8befd56b-2ebe-48c7-9027-4f906b2e09d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dj7sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.679985 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.689117 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.689543 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.689558 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.689580 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.689590 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:35Z","lastTransitionTime":"2026-02-01T14:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.697963 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.710179 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.723024 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.735577 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptp6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.748878 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.761279 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.773740 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.786189 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.791747 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.791786 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.791794 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.791810 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.791820 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:35Z","lastTransitionTime":"2026-02-01T14:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.795220 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.809306 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b92ec7eb262908b4723fb55382b9658604ded30de9c7a9ca163cb8f2140f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.826454 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36
dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.838195 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.848622 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.864738 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e15c6751f48320456010d3ed5a0a63fd22ccc4af5795cbf9abb310bc2ba757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4410a10f10daed5e269510a89e2187288e83aca5c6a930b299e95a69002869fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"message\\\":\\\"dler 8 for removal\\\\nI0201 14:21:32.990140 6099 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0201 14:21:32.990154 6099 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0201 14:21:32.990165 6099 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0201 14:21:32.990245 6099 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 14:21:32.990310 6099 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 14:21:32.990323 6099 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0201 14:21:32.990331 6099 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0201 14:21:32.990339 6099 handler.go:208] Removed *v1.Node event handler 2\\\\nI0201 14:21:32.990347 6099 handler.go:208] Removed *v1.Node event handler 7\\\\nI0201 14:21:32.990362 6099 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0201 14:21:32.990369 6099 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0201 14:21:32.990376 6099 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0201 14:21:32.990441 6099 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 14:21:32.990542 6099 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 14:21:32.990996 6099 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.873803 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dj7sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8befd56b-2ebe-48c7-9027-4f906b2e09d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dj7sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.886067 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.894011 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.894049 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.894064 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.894080 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.894093 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:35Z","lastTransitionTime":"2026-02-01T14:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.899066 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.909767 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.918825 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.935663 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0756a1ed611ee30513307450b56a0a19ffdd6d4671c7ae1b9e59ead1bddd7564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c39b2e34508f9b18b5d851feabce4a6117eeb3e95172f538ec23b0d2dafd1294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptp6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.996185 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.996219 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.996227 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.996241 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:35 crc kubenswrapper[4820]: I0201 14:21:35.996251 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:35Z","lastTransitionTime":"2026-02-01T14:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.098533 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.098577 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.098587 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.098601 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.098611 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:36Z","lastTransitionTime":"2026-02-01T14:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.154857 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 09:26:10.529925251 +0000 UTC Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.201020 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.201064 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.201074 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.201087 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.201097 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:36Z","lastTransitionTime":"2026-02-01T14:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.304123 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.304192 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.304212 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.304235 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.304256 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:36Z","lastTransitionTime":"2026-02-01T14:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.324038 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.406955 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.407010 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.407027 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.407049 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.407068 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:36Z","lastTransitionTime":"2026-02-01T14:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.460240 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4skx_2c428279-629a-4fd5-9955-1598ed4f6f84/ovnkube-controller/1.log" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.461068 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4skx_2c428279-629a-4fd5-9955-1598ed4f6f84/ovnkube-controller/0.log" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.464605 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerID="50e15c6751f48320456010d3ed5a0a63fd22ccc4af5795cbf9abb310bc2ba757" exitCode=1 Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.465651 4820 scope.go:117] "RemoveContainer" containerID="50e15c6751f48320456010d3ed5a0a63fd22ccc4af5795cbf9abb310bc2ba757" Feb 01 14:21:36 crc kubenswrapper[4820]: E0201 14:21:36.465790 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-m4skx_openshift-ovn-kubernetes(2c428279-629a-4fd5-9955-1598ed4f6f84)\"" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.465832 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerDied","Data":"50e15c6751f48320456010d3ed5a0a63fd22ccc4af5795cbf9abb310bc2ba757"} Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.465868 4820 scope.go:117] "RemoveContainer" containerID="4410a10f10daed5e269510a89e2187288e83aca5c6a930b299e95a69002869fa" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.482087 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:36Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.494550 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:36Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.507745 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:36Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.509967 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.510025 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.510047 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.510075 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.510096 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:36Z","lastTransitionTime":"2026-02-01T14:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.529263 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:36Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.542941 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:36Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.560307 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:36Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.575829 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:36Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.591910 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:36Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.613277 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.613621 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.613853 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.614116 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.614386 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:36Z","lastTransitionTime":"2026-02-01T14:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.614527 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b92ec7eb262908b4723fb55382b9658604ded30de9c7a9ca163cb8f2140f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc8631
0b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:36Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.627571 4820 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dj7sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8befd56b-2ebe-48c7-9027-4f906b2e09d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dj7sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:36Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.658276 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e15c6751f48320456010d3ed5a0a63fd22ccc4af5795cbf9abb310bc2ba757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4410a10f10daed5e269510a89e2187288e83aca5c6a930b299e95a69002869fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"message\\\":\\\"dler 8 for removal\\\\nI0201 14:21:32.990140 6099 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0201 14:21:32.990154 6099 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0201 14:21:32.990165 6099 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0201 14:21:32.990245 6099 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 14:21:32.990310 6099 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 14:21:32.990323 6099 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0201 14:21:32.990331 6099 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0201 14:21:32.990339 6099 handler.go:208] Removed *v1.Node event handler 2\\\\nI0201 14:21:32.990347 6099 handler.go:208] Removed *v1.Node event handler 7\\\\nI0201 14:21:32.990362 6099 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0201 14:21:32.990369 6099 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0201 14:21:32.990376 6099 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0201 14:21:32.990441 6099 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 14:21:32.990542 6099 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 14:21:32.990996 6099 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e15c6751f48320456010d3ed5a0a63fd22ccc4af5795cbf9abb310bc2ba757\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:21:36Z\\\",\\\"message\\\":\\\"l for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0201 14:21:35.960316 6318 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-dns/node-resolver-r52d9 after 0 failed attempt(s)\\\\nI0201 14:21:35.960319 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0201 14:21:35.960324 6318 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0201 14:21:35.960324 6318 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-r52d9\\\\nI0201 14:21:35.960249 6318 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-zbtsv\\\\nI0201 14:21:35.960335 6318 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-zbtsv\\\\nI0201 14:21:35.960339 6318 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-zbtsv in node crc\\\\nI0201 14:21:35.960344 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-zbtsv after 0 failed attempt(s)\\\\nI0201 14:21:35.960348 6318 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-zbt\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:36Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.678584 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:36Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.694394 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:36Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.707307 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:36Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.717162 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.717209 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.717220 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.717238 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.717251 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:36Z","lastTransitionTime":"2026-02-01T14:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.722589 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0756a1ed611ee30513307450b56a0a19ffdd6d4671c7ae1b9e59ead1bddd7564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c39b2e34508f9b18b5d851feabce4a6117eeb3e95172f538ec23b0d2dafd1294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptp6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:36Z is after 2025-08-24T17:21:41Z" Feb 01 
14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.739064 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:36Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.820127 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.820169 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.820181 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.820199 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.820211 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:36Z","lastTransitionTime":"2026-02-01T14:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.922758 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.922812 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.922820 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.922835 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:36 crc kubenswrapper[4820]: I0201 14:21:36.922844 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:36Z","lastTransitionTime":"2026-02-01T14:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.025418 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.025447 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.025455 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.025468 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.025476 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:37Z","lastTransitionTime":"2026-02-01T14:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.128702 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.128765 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.128790 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.128821 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.128842 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:37Z","lastTransitionTime":"2026-02-01T14:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.155938 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 13:34:11.053961357 +0000 UTC Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.198195 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.198324 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:37 crc kubenswrapper[4820]: E0201 14:21:37.198541 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.198630 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.198679 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:37 crc kubenswrapper[4820]: E0201 14:21:37.198771 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:21:37 crc kubenswrapper[4820]: E0201 14:21:37.198861 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:21:37 crc kubenswrapper[4820]: E0201 14:21:37.199146 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.231664 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.231979 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.232082 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.232185 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.232282 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:37Z","lastTransitionTime":"2026-02-01T14:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.335362 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.335391 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.335399 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.335424 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.335433 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:37Z","lastTransitionTime":"2026-02-01T14:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.438224 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.438462 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.438569 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.438666 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.438753 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:37Z","lastTransitionTime":"2026-02-01T14:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.449835 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs\") pod \"network-metrics-daemon-dj7sg\" (UID: \"8befd56b-2ebe-48c7-9027-4f906b2e09d5\") " pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:21:37 crc kubenswrapper[4820]: E0201 14:21:37.450018 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 14:21:37 crc kubenswrapper[4820]: E0201 14:21:37.450105 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs podName:8befd56b-2ebe-48c7-9027-4f906b2e09d5 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:41.450082655 +0000 UTC m=+42.970448969 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs") pod "network-metrics-daemon-dj7sg" (UID: "8befd56b-2ebe-48c7-9027-4f906b2e09d5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.469953 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4skx_2c428279-629a-4fd5-9955-1598ed4f6f84/ovnkube-controller/1.log" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.473526 4820 scope.go:117] "RemoveContainer" containerID="50e15c6751f48320456010d3ed5a0a63fd22ccc4af5795cbf9abb310bc2ba757" Feb 01 14:21:37 crc kubenswrapper[4820]: E0201 14:21:37.473847 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-m4skx_openshift-ovn-kubernetes(2c428279-629a-4fd5-9955-1598ed4f6f84)\"" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.489003 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:37Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.500237 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:37Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.515442 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0756a1ed611ee30513307450b56a0a19ffdd6d4671c7ae1b9e59ead1bddd7564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c39b2e34508f9b18b5d851feabce4a6117eeb3e95172f538ec23b0d2dafd1294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:32Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptp6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:37Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.529929 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:37Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.541516 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.541714 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.541730 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.541811 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.541823 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:37Z","lastTransitionTime":"2026-02-01T14:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.543396 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:37Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.554520 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:37Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.569287 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:37Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.587436 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b92ec7eb262908b4723fb55382b9658604ded30de9c7a9ca163cb8f2140f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b1
08348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:37Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.602652 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:37Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.616840 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:37Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.629802 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:37Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.640020 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:37Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.643631 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.643656 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.643665 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.643693 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.643706 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:37Z","lastTransitionTime":"2026-02-01T14:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.659931 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e15c6751f48320456010d3ed5a0a63fd22ccc4af5795cbf9abb310bc2ba757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e15c6751f48320456010d3ed5a0a63fd22ccc4af5795cbf9abb310bc2ba757\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:21:36Z\\\",\\\"message\\\":\\\"l for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0201 14:21:35.960316 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-r52d9 after 0 failed attempt(s)\\\\nI0201 14:21:35.960319 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0201 14:21:35.960324 6318 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0201 14:21:35.960324 6318 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-r52d9\\\\nI0201 14:21:35.960249 6318 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-zbtsv\\\\nI0201 14:21:35.960335 6318 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-zbtsv\\\\nI0201 14:21:35.960339 6318 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-zbtsv in node crc\\\\nI0201 14:21:35.960344 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-zbtsv after 0 failed attempt(s)\\\\nI0201 14:21:35.960348 6318 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-zbt\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m4skx_openshift-ovn-kubernetes(2c428279-629a-4fd5-9955-1598ed4f6f84)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:37Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.668705 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dj7sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8befd56b-2ebe-48c7-9027-4f906b2e09d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dj7sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:37Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.681465 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:37Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.694409 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:37Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.745967 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.746022 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.746032 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.746047 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.746058 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:37Z","lastTransitionTime":"2026-02-01T14:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.848574 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.848605 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.848616 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.848634 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.848644 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:37Z","lastTransitionTime":"2026-02-01T14:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
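The two status_manager entries above fail for the same reason every later patch attempt in this log fails: the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 is serving a certificate whose NotAfter (2025-08-24T17:21:41Z) is long past the node's clock (2026-02-01), so the kubelet's TLS client aborts the handshake before any patch reaches the API server. The rejection is ordinary crypto/x509 validity checking; a minimal standalone sketch of that check follows (the certificate path is hypothetical, and this is not the kubelet's actual code path):

// certexpiry.go - a sketch of the validity test behind
// "x509: certificate has expired or is not yet valid".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; point it at whatever PEM the webhook serves.
	pemBytes, err := os.ReadFile("/tmp/webhook-cert.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now().UTC()
	switch {
	case now.After(cert.NotAfter):
		// The exact condition reported in the log entries above.
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid before %s\n", cert.NotBefore.UTC().Format(time.RFC3339))
	default:
		fmt.Printf("valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}

Until that webhook certificate is reissued, or the node clock again falls inside its validity window, every pod status patch will keep failing with the same error.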
Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.951310 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.951572 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.951701 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.951780 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:37 crc kubenswrapper[4820]: I0201 14:21:37.951845 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:37Z","lastTransitionTime":"2026-02-01T14:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.055305 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.055348 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.055370 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.055388 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.055398 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:38Z","lastTransitionTime":"2026-02-01T14:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
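Each Ready=False heartbeat above carries the same runtime message: the CRI runtime reports NetworkReady=false because nothing has yet written a CNI config, and /etc/kubernetes/cni/net.d/ stays empty until the network provider (OVN-Kubernetes, deployed via multus on this node) comes up. A sketch of the emptiness test follows, assuming libcni's conventional *.conf/*.conflist/*.json filter; this is illustrative, not the runtime's actual loader:

// cnicheck.go - mirrors the test behind "no CNI configuration file
// in /etc/kubernetes/cni/net.d/"; the extension filter is an assumption
// borrowed from libcni's convention.
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d"
	var found []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err != nil {
			panic(err) // only possible on a malformed pattern
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		// Until a config appears, the runtime keeps NetworkReady=false and
		// the kubelet republishes Ready=False on every sync, which is the
		// ~100 ms drumbeat visible in the surrounding entries.
		fmt.Println("no CNI configuration file found; network plugin not ready")
		return
	}
	fmt.Println("CNI configs:", found)
}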
Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.156049 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 23:03:56.352665295 +0000 UTC Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.157450 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.157480 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.157489 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.157500 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.157510 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:38Z","lastTransitionTime":"2026-02-01T14:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.260433 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.260478 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.260487 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.260500 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.260511 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:38Z","lastTransitionTime":"2026-02-01T14:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
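The certificate_manager.go:356 entry above reports the kubelet-serving certificate expiring 2026-02-24 with a rotation deadline of 2025-11-21; one second later the same line recomputes the deadline as 2025-12-29. The deadline moves because client-go's certificate manager picks a jittered point inside the certificate's lifetime so a fleet of kubelets does not hit the CA simultaneously; note that both logged deadlines already lie in this node's past (the clock reads 2026-02-01), so rotation is due immediately. A sketch of that computation follows, assuming a roughly 70-90% window; the exact factors are an assumption, not the verbatim client-go source:

// rotation.go - why two recomputations of the rotation deadline for the
// same certificate land on different dates.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func jitteredRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	fraction := 0.7 + 0.2*rand.Float64() // random point in the assumed ~[70%, 90%) band
	return notBefore.Add(time.Duration(float64(total) * fraction))
}

func main() {
	// Expiration taken from the log; the issue date is assumed for the demo.
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.AddDate(-1, 0, 0)
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", jitteredRotationDeadline(notBefore, notAfter).UTC())
	}
}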
Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.363812 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.363853 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.363861 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.363887 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.363896 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:38Z","lastTransitionTime":"2026-02-01T14:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.466362 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.466444 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.466478 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.466510 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.466532 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:38Z","lastTransitionTime":"2026-02-01T14:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.569225 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.569265 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.569274 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.569288 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.569297 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:38Z","lastTransitionTime":"2026-02-01T14:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
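The patch bodies in the status_manager entries are hard to read because they are escaped twice: once when the kubelet quotes the patch inside its error string, and again when klog quotes the whole error into the journal line, turning every quote into \\\". The patches themselves are strategic merge patches: conditions entries are merged by their type key, and the $setElementOrder/conditions directive pins the resulting order. A small stdlib-only sketch that unquotes one of the fragments seen above (the kube-controller-manager-crc pod uid) back into plain JSON:

// unquote.go - recovering the patch JSON from a doubly escaped journal
// fragment; the fragment below is copied from this log.
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	fromJournal := `{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"}}`

	// First unquote: undo klog's quoting of the error string.
	once, err := strconv.Unquote(`"` + fromJournal + `"`)
	if err != nil {
		panic(err)
	}
	// Second unquote: undo the kubelet's quoting of the patch body.
	twice, err := strconv.Unquote(`"` + once + `"`)
	if err != nil {
		panic(err)
	}

	var patch map[string]any
	if err := json.Unmarshal([]byte(twice), &patch); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", patch)
}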
Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.672504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.672545 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.672554 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.672568 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.672577 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:38Z","lastTransitionTime":"2026-02-01T14:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.775220 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.775266 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.775279 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.775295 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.775307 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:38Z","lastTransitionTime":"2026-02-01T14:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.877979 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.878029 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.878043 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.878065 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.878079 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:38Z","lastTransitionTime":"2026-02-01T14:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.981078 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.981124 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.981133 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.981189 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:38 crc kubenswrapper[4820]: I0201 14:21:38.981201 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:38Z","lastTransitionTime":"2026-02-01T14:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.084244 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.084291 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.084299 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.084312 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.084321 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:39Z","lastTransitionTime":"2026-02-01T14:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.156660 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 18:37:26.629662465 +0000 UTC Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.186906 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.187028 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.187043 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.187061 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.187074 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:39Z","lastTransitionTime":"2026-02-01T14:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.198437 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.198533 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.198440 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:39 crc kubenswrapper[4820]: E0201 14:21:39.198582 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.198610 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:39 crc kubenswrapper[4820]: E0201 14:21:39.198670 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:21:39 crc kubenswrapper[4820]: E0201 14:21:39.198809 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
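The util.go:30 / pod_workers.go:1301 pairs above show the other half of the outage: four pods need fresh sandboxes, but their sync is skipped while the runtime network is down. That gate only applies to pods that need a pod-level network namespace; host-network pods (such as the static kube-apiserver and kube-controller-manager pods whose patches appear throughout this log) keep running, which is why their containers show Running while these four sit in ContainerCreating. A compressed sketch of the gate follows, with hypothetical types standing in for the kubelet's internals:

// syncgate.go - illustrative only; models the "network is not ready" skip,
// not the kubelet's actual API.
package main

import (
	"errors"
	"fmt"
)

// Pod is a stand-in for the kubelet's pod object; only the fields
// needed for the network gate are modeled here.
type Pod struct {
	Name        string
	HostNetwork bool
}

var errNetworkNotReady = errors.New(
	"network is not ready: container runtime network not ready: NetworkReady=false")

// canStartSandbox mirrors the decision seen in the log: a pod that needs a
// pod network namespace cannot get a sandbox until the CNI plugin reports
// ready, while host-network pods proceed regardless.
func canStartSandbox(p Pod, networkReady bool) error {
	if !networkReady && !p.HostNetwork {
		return errNetworkNotReady
	}
	return nil
}

func main() {
	pods := []Pod{
		{Name: "openshift-network-diagnostics/network-check-target-xd92c", HostNetwork: false},
		{Name: "openshift-kube-apiserver/kube-apiserver-crc", HostNetwork: true},
	}
	for _, p := range pods {
		if err := canStartSandbox(p, false); err != nil {
			fmt.Printf("Error syncing pod, skipping: pod=%q err=%v\n", p.Name, err)
			continue
		}
		fmt.Printf("starting sandbox for %q\n", p.Name)
	}
}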
Feb 01 14:21:39 crc kubenswrapper[4820]: E0201 14:21:39.199027 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.213372 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8
506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:39Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.229949 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:39Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.243006 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0756a1ed611ee30513307450b56a0a19ffdd6d4671c7ae1b9e59ead1bddd7564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c39b2e34508f9b18b5d851feabce4a6117eeb3e95172f538ec23b0d2dafd1294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptp6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:39Z is after 2025-08-24T17:21:41Z" Feb 01 
14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.260160 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:39Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.274701 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:39Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.290082 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.290128 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.290146 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.290169 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.290190 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:39Z","lastTransitionTime":"2026-02-01T14:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.293585 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:39Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.313788 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:39Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.336740 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:39Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.354507 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:39Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.375615 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:39Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.393342 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.393396 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.393418 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.393447 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.393471 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:39Z","lastTransitionTime":"2026-02-01T14:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.396190 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:39Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.421766 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:39Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.438455 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:39Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.462578 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b92ec7eb262908b4723fb55382b9658604ded30de9c7a9ca163cb8f2140f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:39Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.485405 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e15c6751f48320456010d3ed5a0a63fd22ccc4af5795cbf9abb310bc2ba757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e15c6751f48320456010d3ed5a0a63fd22ccc4af5795cbf9abb310bc2ba757\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:21:36Z\\\",\\\"message\\\":\\\"l for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0201 14:21:35.960316 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-r52d9 after 0 failed attempt(s)\\\\nI0201 14:21:35.960319 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0201 14:21:35.960324 6318 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0201 14:21:35.960324 6318 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-r52d9\\\\nI0201 14:21:35.960249 6318 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-zbtsv\\\\nI0201 14:21:35.960335 6318 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-zbtsv\\\\nI0201 14:21:35.960339 6318 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-zbtsv in node crc\\\\nI0201 14:21:35.960344 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-zbtsv after 0 failed attempt(s)\\\\nI0201 14:21:35.960348 6318 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-zbt\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m4skx_openshift-ovn-kubernetes(2c428279-629a-4fd5-9955-1598ed4f6f84)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:39Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.495178 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.495200 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.495233 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.495245 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.495253 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:39Z","lastTransitionTime":"2026-02-01T14:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.498659 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dj7sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8befd56b-2ebe-48c7-9027-4f906b2e09d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dj7sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:39Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.597909 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.597953 4820 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.597966 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.597984 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.597999 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:39Z","lastTransitionTime":"2026-02-01T14:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.701131 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.701273 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.701348 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.701424 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.701456 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:39Z","lastTransitionTime":"2026-02-01T14:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.804172 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.804224 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.804235 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.804251 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.804261 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:39Z","lastTransitionTime":"2026-02-01T14:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.906978 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.907030 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.907043 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.907061 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:39 crc kubenswrapper[4820]: I0201 14:21:39.907074 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:39Z","lastTransitionTime":"2026-02-01T14:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.009899 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.009937 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.009946 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.009960 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.009971 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:40Z","lastTransitionTime":"2026-02-01T14:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.112893 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.112930 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.112940 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.112954 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.112963 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:40Z","lastTransitionTime":"2026-02-01T14:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.157555 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 01:25:10.438609223 +0000 UTC Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.215351 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.215383 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.215392 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.215405 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.215414 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:40Z","lastTransitionTime":"2026-02-01T14:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.317178 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.317215 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.317224 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.317238 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.317247 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:40Z","lastTransitionTime":"2026-02-01T14:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.419784 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.419824 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.419833 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.419848 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.419859 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:40Z","lastTransitionTime":"2026-02-01T14:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.523450 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.523493 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.523509 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.523526 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.523538 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:40Z","lastTransitionTime":"2026-02-01T14:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.625904 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.625942 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.625952 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.625966 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.625977 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:40Z","lastTransitionTime":"2026-02-01T14:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.728197 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.728233 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.728241 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.728254 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.728274 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:40Z","lastTransitionTime":"2026-02-01T14:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.876591 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.876636 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.876648 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.876663 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.876675 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:40Z","lastTransitionTime":"2026-02-01T14:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.979052 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.979130 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.979142 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.979161 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:40 crc kubenswrapper[4820]: I0201 14:21:40.979174 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:40Z","lastTransitionTime":"2026-02-01T14:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.081579 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.081617 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.081628 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.081644 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.081657 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:41Z","lastTransitionTime":"2026-02-01T14:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.158333 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 15:28:40.364934189 +0000 UTC Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.183665 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.183711 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.183726 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.183745 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.183762 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:41Z","lastTransitionTime":"2026-02-01T14:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.198376 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.198437 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.198939 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:41 crc kubenswrapper[4820]: E0201 14:21:41.198978 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.199032 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:21:41 crc kubenswrapper[4820]: E0201 14:21:41.199179 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:21:41 crc kubenswrapper[4820]: E0201 14:21:41.199305 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:21:41 crc kubenswrapper[4820]: E0201 14:21:41.199398 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.286475 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.286511 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.286520 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.286534 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.286544 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:41Z","lastTransitionTime":"2026-02-01T14:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.389575 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.389951 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.390137 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.390310 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.390492 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:41Z","lastTransitionTime":"2026-02-01T14:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.488903 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs\") pod \"network-metrics-daemon-dj7sg\" (UID: \"8befd56b-2ebe-48c7-9027-4f906b2e09d5\") " pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:21:41 crc kubenswrapper[4820]: E0201 14:21:41.489204 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 14:21:41 crc kubenswrapper[4820]: E0201 14:21:41.489295 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs podName:8befd56b-2ebe-48c7-9027-4f906b2e09d5 nodeName:}" failed. No retries permitted until 2026-02-01 14:21:49.489280914 +0000 UTC m=+51.009647198 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs") pod "network-metrics-daemon-dj7sg" (UID: "8befd56b-2ebe-48c7-9027-4f906b2e09d5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.492129 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.492165 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.492174 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.492187 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.492197 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:41Z","lastTransitionTime":"2026-02-01T14:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.594211 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.594250 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.594260 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.594277 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.594290 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:41Z","lastTransitionTime":"2026-02-01T14:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.696757 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.696809 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.696818 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.696833 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.696842 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:41Z","lastTransitionTime":"2026-02-01T14:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.799653 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.799707 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.799721 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.799736 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.799746 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:41Z","lastTransitionTime":"2026-02-01T14:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.902069 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.902103 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.902114 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.902127 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:41 crc kubenswrapper[4820]: I0201 14:21:41.902135 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:41Z","lastTransitionTime":"2026-02-01T14:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.004554 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.004587 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.004597 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.004610 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.004622 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:42Z","lastTransitionTime":"2026-02-01T14:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.107030 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.107081 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.107091 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.107103 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.107112 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:42Z","lastTransitionTime":"2026-02-01T14:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.158790 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 04:52:57.35104409 +0000 UTC Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.210092 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.210252 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.210278 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.210300 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.210400 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:42Z","lastTransitionTime":"2026-02-01T14:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.312602 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.312678 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.312696 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.312726 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.312748 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:42Z","lastTransitionTime":"2026-02-01T14:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.415045 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.415350 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.415479 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.415605 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.415722 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:42Z","lastTransitionTime":"2026-02-01T14:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.518221 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.518251 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.518263 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.518279 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.518289 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:42Z","lastTransitionTime":"2026-02-01T14:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.621080 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.621446 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.621691 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.621951 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.622152 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:42Z","lastTransitionTime":"2026-02-01T14:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.724754 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.724800 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.724813 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.724835 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.724848 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:42Z","lastTransitionTime":"2026-02-01T14:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.828184 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.828245 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.828267 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.828298 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.828325 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:42Z","lastTransitionTime":"2026-02-01T14:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.930836 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.930939 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.930960 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.930985 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:42 crc kubenswrapper[4820]: I0201 14:21:42.931003 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:42Z","lastTransitionTime":"2026-02-01T14:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.033687 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.034002 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.034152 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.034256 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.034351 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:43Z","lastTransitionTime":"2026-02-01T14:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.137292 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.137333 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.137371 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.137388 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.137401 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:43Z","lastTransitionTime":"2026-02-01T14:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.198223 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 13:05:45.893411675 +0000 UTC Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.198370 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:43 crc kubenswrapper[4820]: E0201 14:21:43.198519 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.198640 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:43 crc kubenswrapper[4820]: E0201 14:21:43.198772 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.198825 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.198849 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:43 crc kubenswrapper[4820]: E0201 14:21:43.198933 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:21:43 crc kubenswrapper[4820]: E0201 14:21:43.199117 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.240119 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.240182 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.240202 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.240222 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.240236 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:43Z","lastTransitionTime":"2026-02-01T14:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.343257 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.343309 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.343324 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.343355 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.343373 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:43Z","lastTransitionTime":"2026-02-01T14:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.445600 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.445649 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.445661 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.445680 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.445692 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:43Z","lastTransitionTime":"2026-02-01T14:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.547793 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.547850 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.547864 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.547916 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.547935 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:43Z","lastTransitionTime":"2026-02-01T14:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.651178 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.651222 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.651231 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.651247 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.651255 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:43Z","lastTransitionTime":"2026-02-01T14:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.754150 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.754184 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.754192 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.754206 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.754216 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:43Z","lastTransitionTime":"2026-02-01T14:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.856272 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.856314 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.856324 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.856342 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.856353 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:43Z","lastTransitionTime":"2026-02-01T14:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.959156 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.959194 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.959205 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.959221 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:43 crc kubenswrapper[4820]: I0201 14:21:43.959230 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:43Z","lastTransitionTime":"2026-02-01T14:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.061083 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.061132 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.061144 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.061164 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.061177 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:44Z","lastTransitionTime":"2026-02-01T14:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.163939 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.163989 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.163999 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.164014 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.164025 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:44Z","lastTransitionTime":"2026-02-01T14:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.198507 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 13:40:28.472468543 +0000 UTC Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.266760 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.266792 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.266802 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.266818 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.266829 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:44Z","lastTransitionTime":"2026-02-01T14:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.368415 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.368462 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.368470 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.368482 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.368491 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:44Z","lastTransitionTime":"2026-02-01T14:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.470994 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.471033 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.471042 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.471056 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.471066 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:44Z","lastTransitionTime":"2026-02-01T14:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.572955 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.572994 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.573010 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.573026 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.573038 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:44Z","lastTransitionTime":"2026-02-01T14:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.675390 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.675452 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.675464 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.675479 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.675490 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:44Z","lastTransitionTime":"2026-02-01T14:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.778294 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.778332 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.778343 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.778360 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.778371 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:44Z","lastTransitionTime":"2026-02-01T14:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.880649 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.880688 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.880697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.880722 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.880734 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:44Z","lastTransitionTime":"2026-02-01T14:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.983054 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.983105 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.983122 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.983140 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:44 crc kubenswrapper[4820]: I0201 14:21:44.983154 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:44Z","lastTransitionTime":"2026-02-01T14:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.086469 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.086523 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.086538 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.086556 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.086568 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:45Z","lastTransitionTime":"2026-02-01T14:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.189894 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.189967 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.189980 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.190007 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.190025 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:45Z","lastTransitionTime":"2026-02-01T14:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.198348 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.198395 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.198428 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.198365 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:45 crc kubenswrapper[4820]: E0201 14:21:45.198567 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 01 14:21:45 crc kubenswrapper[4820]: E0201 14:21:45.198567 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.198616 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 10:04:21.010742143 +0000 UTC
Feb 01 14:21:45 crc kubenswrapper[4820]: E0201 14:21:45.198704 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 14:21:45 crc kubenswrapper[4820]: E0201 14:21:45.198862 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 14:21:45 crc kubenswrapper[4820]: E0201 14:21:45.199013 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5"
Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.292986 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.293051 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.293074 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.293101 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
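The certificate_manager.go:356 entry above is worth noting: the kubelet-serving certificate's rotation deadline (2026-01-12) is already behind the node's clock (2026-02-01), and the node-status patch failures later in this log blame an expired certificate on the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743. A hedged way to confirm such an expiry from the node is to fetch the serving certificate without verification and read its validity window. The sketch below assumes Python with the third-party cryptography package installed, and reuses the host and port reported in the webhook errors; it is a diagnostic sketch, not OpenShift tooling.

```python
import ssl
from cryptography import x509  # third-party: pip install cryptography

# ssl.get_server_certificate performs no verification when no CA bundle is
# supplied, so it can retrieve a certificate even after it has expired.
def validity_window(host: str, port: int):
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode("ascii"))
    return cert.not_valid_before, cert.not_valid_after

if __name__ == "__main__":
    # 127.0.0.1:9743 is the endpoint named in the "failed calling webhook"
    # errors that follow in this log.
    start, end = validity_window("127.0.0.1", 9743)
    print(f"notBefore={start}  notAfter={end}")
```

For the failure recorded here, the printed notAfter should match the 2025-08-24T17:21:41Z expiry quoted in the x509 error text.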
Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.293120 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:45Z","lastTransitionTime":"2026-02-01T14:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.395931 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.395985 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.395999 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.396016 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.396027 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:45Z","lastTransitionTime":"2026-02-01T14:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.491092 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.491127 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.491135 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.491148 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.491157 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:45Z","lastTransitionTime":"2026-02-01T14:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:45 crc kubenswrapper[4820]: E0201 14:21:45.501674 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:45Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.505767 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.507414 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.507486 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.507521 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.507553 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:45Z","lastTransitionTime":"2026-02-01T14:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:45 crc kubenswrapper[4820]: E0201 14:21:45.519376 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:45Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.523532 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.523566 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.523576 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.523591 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.523601 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:45Z","lastTransitionTime":"2026-02-01T14:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:45 crc kubenswrapper[4820]: E0201 14:21:45.533599 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:45Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.537141 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.537174 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.537183 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.537202 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.537211 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:45Z","lastTransitionTime":"2026-02-01T14:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:45 crc kubenswrapper[4820]: E0201 14:21:45.548490 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:45Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.551519 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.551558 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.551571 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.551586 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.551596 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:45Z","lastTransitionTime":"2026-02-01T14:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:45 crc kubenswrapper[4820]: E0201 14:21:45.562845 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:45Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:45 crc kubenswrapper[4820]: E0201 14:21:45.563034 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.564326 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.564404 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.564414 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.564432 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.564446 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:45Z","lastTransitionTime":"2026-02-01T14:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.666747 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.666796 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.666808 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.666826 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.666839 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:45Z","lastTransitionTime":"2026-02-01T14:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.772299 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.772557 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.772643 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.772714 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.772776 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:45Z","lastTransitionTime":"2026-02-01T14:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.875593 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.875666 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.875687 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.875713 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.875733 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:45Z","lastTransitionTime":"2026-02-01T14:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.979250 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.979302 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.979317 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.979339 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:45 crc kubenswrapper[4820]: I0201 14:21:45.979356 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:45Z","lastTransitionTime":"2026-02-01T14:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.081666 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.081719 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.081740 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.081760 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.081775 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:46Z","lastTransitionTime":"2026-02-01T14:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.184409 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.184435 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.184443 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.184456 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.184465 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:46Z","lastTransitionTime":"2026-02-01T14:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.198983 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 04:19:40.420271263 +0000 UTC Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.286807 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.286866 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.286918 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.286948 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.286974 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:46Z","lastTransitionTime":"2026-02-01T14:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.390736 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.390772 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.390783 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.390803 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.390814 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:46Z","lastTransitionTime":"2026-02-01T14:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.494180 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.494240 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.494260 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.494284 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.494302 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:46Z","lastTransitionTime":"2026-02-01T14:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.597527 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.597746 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.597834 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.597932 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.598015 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:46Z","lastTransitionTime":"2026-02-01T14:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.700912 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.701016 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.701040 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.701058 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.701070 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:46Z","lastTransitionTime":"2026-02-01T14:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.803374 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.803439 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.803461 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.803488 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.803509 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:46Z","lastTransitionTime":"2026-02-01T14:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.905840 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.905902 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.905915 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.905932 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:46 crc kubenswrapper[4820]: I0201 14:21:46.905943 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:46Z","lastTransitionTime":"2026-02-01T14:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.008163 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.008219 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.008237 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.008261 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.008279 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:47Z","lastTransitionTime":"2026-02-01T14:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.111203 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.111269 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.111286 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.111311 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.111330 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:47Z","lastTransitionTime":"2026-02-01T14:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.197977 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.198088 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:47 crc kubenswrapper[4820]: E0201 14:21:47.198155 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.198101 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.198252 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:47 crc kubenswrapper[4820]: E0201 14:21:47.198444 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:21:47 crc kubenswrapper[4820]: E0201 14:21:47.198620 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:21:47 crc kubenswrapper[4820]: E0201 14:21:47.198777 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.199088 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 04:27:07.580724947 +0000 UTC Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.214125 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.214165 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.214173 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.214186 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.214195 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:47Z","lastTransitionTime":"2026-02-01T14:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.316928 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.316994 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.317027 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.317063 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.317088 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:47Z","lastTransitionTime":"2026-02-01T14:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.420067 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.420118 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.420134 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.420157 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.420175 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:47Z","lastTransitionTime":"2026-02-01T14:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.522512 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.522945 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.523077 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.523308 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.523446 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:47Z","lastTransitionTime":"2026-02-01T14:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.625779 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.625816 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.625826 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.625840 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.625849 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:47Z","lastTransitionTime":"2026-02-01T14:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.728154 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.728431 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.728529 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.728621 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.728711 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:47Z","lastTransitionTime":"2026-02-01T14:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.831223 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.831276 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.831293 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.831316 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.831335 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:47Z","lastTransitionTime":"2026-02-01T14:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.934015 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.934052 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.934062 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.934077 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:47 crc kubenswrapper[4820]: I0201 14:21:47.934088 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:47Z","lastTransitionTime":"2026-02-01T14:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.036770 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.036808 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.036817 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.036829 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.036837 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:48Z","lastTransitionTime":"2026-02-01T14:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.140510 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.140564 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.140582 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.140605 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.140622 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:48Z","lastTransitionTime":"2026-02-01T14:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.200119 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 09:45:01.203710025 +0000 UTC Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.242821 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.242857 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.242868 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.242900 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.242911 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:48Z","lastTransitionTime":"2026-02-01T14:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.345979 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.346015 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.346023 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.346036 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.346045 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:48Z","lastTransitionTime":"2026-02-01T14:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.447755 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.447796 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.447810 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.447828 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.447840 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:48Z","lastTransitionTime":"2026-02-01T14:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.550109 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.550149 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.550159 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.550173 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.550183 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:48Z","lastTransitionTime":"2026-02-01T14:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.652731 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.652782 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.652800 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.652823 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.652846 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:48Z","lastTransitionTime":"2026-02-01T14:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.755213 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.755596 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.755607 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.755624 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.755633 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:48Z","lastTransitionTime":"2026-02-01T14:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.858187 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.858568 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.858774 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.859049 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.859206 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:48Z","lastTransitionTime":"2026-02-01T14:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.961147 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.961184 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.961195 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.961209 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 14:21:48 crc kubenswrapper[4820]: I0201 14:21:48.961220 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:48Z","lastTransitionTime":"2026-02-01T14:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.063936 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.064006 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.064029 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.064056 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.064077 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:49Z","lastTransitionTime":"2026-02-01T14:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.167400 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.167468 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.167493 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.167522 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.167544 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:49Z","lastTransitionTime":"2026-02-01T14:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.197999 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.198092 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg"
Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.198130 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.198026 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 14:21:49 crc kubenswrapper[4820]: E0201 14:21:49.198175 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 14:21:49 crc kubenswrapper[4820]: E0201 14:21:49.198207 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 14:21:49 crc kubenswrapper[4820]: E0201 14:21:49.198258 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 14:21:49 crc kubenswrapper[4820]: E0201 14:21:49.198305 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5"
Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.201633 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 09:50:01.574701781 +0000 UTC
Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.213921 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:49Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.226127 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:49Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.241705 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0756a1ed611ee30513307450b56a0a19ffdd6d4671c7ae1b9e59ead1bddd7564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c39b2e34508f9b18b5d851feabce4a6117eeb3e95172f538ec23b0d2dafd1294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:32Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptp6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:49Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.262354 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:49Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.280503 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.280551 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.280561 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.280575 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.280586 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:49Z","lastTransitionTime":"2026-02-01T14:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.285526 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:49Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.301424 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:49Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.321799 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:49Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.339291 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-api
server-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:49Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.357987 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:49Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.371753 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:49Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.383399 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.383493 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.383538 4820 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.383560 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.383575 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:49Z","lastTransitionTime":"2026-02-01T14:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.385111 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:49Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.399014 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b92ec7eb262908b4723fb55382b9658604ded30de9c7a9ca163cb8f2140f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:49Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.420251 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e15c6751f48320456010d3ed5a0a63fd22ccc4af5795cbf9abb310bc2ba757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e15c6751f48320456010d3ed5a0a63fd22ccc4af5795cbf9abb310bc2ba757\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:21:36Z\\\",\\\"message\\\":\\\"l for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0201 14:21:35.960316 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-r52d9 after 0 failed attempt(s)\\\\nI0201 14:21:35.960319 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0201 14:21:35.960324 6318 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0201 14:21:35.960324 6318 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-r52d9\\\\nI0201 14:21:35.960249 6318 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-zbtsv\\\\nI0201 14:21:35.960335 6318 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-zbtsv\\\\nI0201 14:21:35.960339 6318 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-zbtsv in node crc\\\\nI0201 14:21:35.960344 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-zbtsv after 0 failed attempt(s)\\\\nI0201 14:21:35.960348 6318 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-zbt\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m4skx_openshift-ovn-kubernetes(2c428279-629a-4fd5-9955-1598ed4f6f84)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:49Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.431640 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dj7sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8befd56b-2ebe-48c7-9027-4f906b2e09d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dj7sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:49Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.445084 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:49Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.457284 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:49Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.486240 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.486320 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.486344 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.486374 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.486397 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:49Z","lastTransitionTime":"2026-02-01T14:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.577203 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs\") pod \"network-metrics-daemon-dj7sg\" (UID: \"8befd56b-2ebe-48c7-9027-4f906b2e09d5\") " pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:21:49 crc kubenswrapper[4820]: E0201 14:21:49.577412 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 14:21:49 crc kubenswrapper[4820]: E0201 14:21:49.577527 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs podName:8befd56b-2ebe-48c7-9027-4f906b2e09d5 nodeName:}" failed. No retries permitted until 2026-02-01 14:22:05.577497684 +0000 UTC m=+67.097864008 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs") pod "network-metrics-daemon-dj7sg" (UID: "8befd56b-2ebe-48c7-9027-4f906b2e09d5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.589759 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.589842 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.589857 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.589920 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.589936 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:49Z","lastTransitionTime":"2026-02-01T14:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.693522 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.693573 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.693594 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.693616 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.693634 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:49Z","lastTransitionTime":"2026-02-01T14:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.795914 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.795966 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.795983 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.796004 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.796021 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:49Z","lastTransitionTime":"2026-02-01T14:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.897648 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.897686 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.897697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.897713 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:49 crc kubenswrapper[4820]: I0201 14:21:49.897723 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:49Z","lastTransitionTime":"2026-02-01T14:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.000329 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.000381 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.000396 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.000418 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.000435 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:50Z","lastTransitionTime":"2026-02-01T14:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.102415 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.102497 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.102509 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.102522 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.102531 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:50Z","lastTransitionTime":"2026-02-01T14:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.201736 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 15:51:23.548926305 +0000 UTC Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.204250 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.204286 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.204299 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.204326 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.204338 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:50Z","lastTransitionTime":"2026-02-01T14:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.306245 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.306281 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.306290 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.306303 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.306314 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:50Z","lastTransitionTime":"2026-02-01T14:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.410106 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.410147 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.410156 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.410172 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.410182 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:50Z","lastTransitionTime":"2026-02-01T14:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.513654 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.513712 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.513730 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.513755 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.513775 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:50Z","lastTransitionTime":"2026-02-01T14:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.617343 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.617403 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.617419 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.617441 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.617459 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:50Z","lastTransitionTime":"2026-02-01T14:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.720529 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.720591 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.720630 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.720666 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.720688 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:50Z","lastTransitionTime":"2026-02-01T14:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.824126 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.824216 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.824242 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.824282 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.824310 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:50Z","lastTransitionTime":"2026-02-01T14:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.891736 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:21:50 crc kubenswrapper[4820]: E0201 14:21:50.892245 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:22:22.892226227 +0000 UTC m=+84.412592521 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.928117 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.928162 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.928175 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.928194 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.928208 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:50Z","lastTransitionTime":"2026-02-01T14:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.993493 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.993585 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.993626 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:50 crc kubenswrapper[4820]: I0201 14:21:50.993666 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:50 crc kubenswrapper[4820]: E0201 14:21:50.993711 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 14:21:50 crc kubenswrapper[4820]: E0201 14:21:50.993781 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 14:21:50 crc kubenswrapper[4820]: E0201 14:21:50.993788 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 14:22:22.993771936 +0000 UTC m=+84.514138220 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 14:21:50 crc kubenswrapper[4820]: E0201 14:21:50.993990 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 14:21:50 crc kubenswrapper[4820]: E0201 14:21:50.994029 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-01 14:22:22.99394854 +0000 UTC m=+84.514314844 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 14:21:50 crc kubenswrapper[4820]: E0201 14:21:50.994047 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 14:21:50 crc kubenswrapper[4820]: E0201 14:21:50.994075 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:50 crc kubenswrapper[4820]: E0201 14:21:50.994076 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 14:21:50 crc kubenswrapper[4820]: E0201 14:21:50.994159 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 14:21:50 crc kubenswrapper[4820]: E0201 14:21:50.994185 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:50 crc kubenswrapper[4820]: E0201 14:21:50.994160 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-01 14:22:22.994133684 +0000 UTC m=+84.514500008 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:50 crc kubenswrapper[4820]: E0201 14:21:50.994355 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-01 14:22:22.994309108 +0000 UTC m=+84.514675432 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.033140 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.033226 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.033252 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.033287 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.033317 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:51Z","lastTransitionTime":"2026-02-01T14:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.137570 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.137621 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.137632 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.137651 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.137663 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:51Z","lastTransitionTime":"2026-02-01T14:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.198670 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.198702 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.198702 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:51 crc kubenswrapper[4820]: E0201 14:21:51.198846 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.198954 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:51 crc kubenswrapper[4820]: E0201 14:21:51.199015 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:21:51 crc kubenswrapper[4820]: E0201 14:21:51.199441 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:21:51 crc kubenswrapper[4820]: E0201 14:21:51.199479 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.199733 4820 scope.go:117] "RemoveContainer" containerID="50e15c6751f48320456010d3ed5a0a63fd22ccc4af5795cbf9abb310bc2ba757" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.201818 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 00:23:52.582688283 +0000 UTC Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.240195 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.240239 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.240250 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.240269 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.240284 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:51Z","lastTransitionTime":"2026-02-01T14:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.344128 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.344684 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.344709 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.344751 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.344778 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:51Z","lastTransitionTime":"2026-02-01T14:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.448326 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.448404 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.448423 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.448449 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.448467 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:51Z","lastTransitionTime":"2026-02-01T14:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.551442 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.551522 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.551534 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.551553 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.551566 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:51Z","lastTransitionTime":"2026-02-01T14:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.651854 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.655134 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.655253 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.655322 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.655347 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.655409 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:51Z","lastTransitionTime":"2026-02-01T14:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.666183 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.674014 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:51Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.691084 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:51Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.713641 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0756a1ed611ee30513307450b56a0a19ffdd6d4671c7ae1b9e59ead1bddd7564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c39b2e34508f9b18b5d851feabce4a6117eeb3e95172f538ec23b0d2dafd1294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:32Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptp6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:51Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.733300 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:51Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.751581 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:51Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.758099 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.758157 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.758168 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.758187 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.758205 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:51Z","lastTransitionTime":"2026-02-01T14:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.769089 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:51Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.792189 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:51Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.817550 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:51Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.841151 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:51Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.859660 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:51Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.861390 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 
14:21:51.861456 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.861483 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.861513 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.861537 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:51Z","lastTransitionTime":"2026-02-01T14:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.884986 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b92ec7eb262908b4723fb55382b9658604ded30de9c7a9ca163cb8f2140f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/
net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:51Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.904775 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:51Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.930064 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e15c6751f48320456010d3ed5a0a63fd22ccc4af5795cbf9abb310bc2ba757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e15c6751f48320456010d3ed5a0a63fd22ccc4af5795cbf9abb310bc2ba757\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:21:36Z\\\",\\\"message\\\":\\\"l for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0201 14:21:35.960316 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-r52d9 after 0 failed attempt(s)\\\\nI0201 14:21:35.960319 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0201 14:21:35.960324 6318 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0201 14:21:35.960324 6318 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-r52d9\\\\nI0201 14:21:35.960249 6318 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-zbtsv\\\\nI0201 14:21:35.960335 6318 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-zbtsv\\\\nI0201 14:21:35.960339 6318 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-zbtsv in node crc\\\\nI0201 14:21:35.960344 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-zbtsv after 0 failed attempt(s)\\\\nI0201 14:21:35.960348 6318 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-zbt\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m4skx_openshift-ovn-kubernetes(2c428279-629a-4fd5-9955-1598ed4f6f84)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:51Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.944639 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dj7sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8befd56b-2ebe-48c7-9027-4f906b2e09d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dj7sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:51Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.964613 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.964667 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.964685 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.964707 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.964725 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:51Z","lastTransitionTime":"2026-02-01T14:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.965007 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:51Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:51 crc kubenswrapper[4820]: I0201 14:21:51.982125 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:51Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.067167 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.067225 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.067241 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.067264 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.067283 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:52Z","lastTransitionTime":"2026-02-01T14:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.170418 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.170471 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.170488 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.170510 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.170528 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:52Z","lastTransitionTime":"2026-02-01T14:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.202355 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 09:54:08.386668484 +0000 UTC Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.272687 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.272728 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.272745 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.272762 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.272773 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:52Z","lastTransitionTime":"2026-02-01T14:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.376023 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.376086 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.376108 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.376138 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.376161 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:52Z","lastTransitionTime":"2026-02-01T14:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.479472 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.479528 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.479544 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.479566 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.479594 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:52Z","lastTransitionTime":"2026-02-01T14:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.583151 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.583210 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.583230 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.583261 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.583282 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:52Z","lastTransitionTime":"2026-02-01T14:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.685537 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.685578 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.685588 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.685603 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.685612 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:52Z","lastTransitionTime":"2026-02-01T14:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.788286 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.788315 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.788323 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.788336 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.788346 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:52Z","lastTransitionTime":"2026-02-01T14:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.890886 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.890933 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.890948 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.890967 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.890980 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:52Z","lastTransitionTime":"2026-02-01T14:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.993279 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.993305 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.993313 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.993325 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:52 crc kubenswrapper[4820]: I0201 14:21:52.993335 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:52Z","lastTransitionTime":"2026-02-01T14:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.095670 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.095705 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.095714 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.095726 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.095735 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:53Z","lastTransitionTime":"2026-02-01T14:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.197662 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.197686 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.197719 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:53 crc kubenswrapper[4820]: E0201 14:21:53.198141 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.197768 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:21:53 crc kubenswrapper[4820]: E0201 14:21:53.198206 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.197742 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.198012 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:53 crc kubenswrapper[4820]: E0201 14:21:53.198327 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.198332 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:53 crc kubenswrapper[4820]: E0201 14:21:53.198376 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.198383 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.198407 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:53Z","lastTransitionTime":"2026-02-01T14:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.202638 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 10:20:35.067702343 +0000 UTC Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.306380 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.306431 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.306444 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.306462 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.306476 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:53Z","lastTransitionTime":"2026-02-01T14:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.408618 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.408656 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.408666 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.408682 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.408693 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:53Z","lastTransitionTime":"2026-02-01T14:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.510913 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.510945 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.510956 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.510978 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.510988 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:53Z","lastTransitionTime":"2026-02-01T14:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.521260 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4skx_2c428279-629a-4fd5-9955-1598ed4f6f84/ovnkube-controller/2.log" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.521775 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4skx_2c428279-629a-4fd5-9955-1598ed4f6f84/ovnkube-controller/1.log" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.525099 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerID="08b9f9a05af274542ac8504b8f3fe5b1dc25fa5c2de853c417a27af4676169d7" exitCode=1 Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.525149 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerDied","Data":"08b9f9a05af274542ac8504b8f3fe5b1dc25fa5c2de853c417a27af4676169d7"} Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.525193 4820 scope.go:117] "RemoveContainer" containerID="50e15c6751f48320456010d3ed5a0a63fd22ccc4af5795cbf9abb310bc2ba757" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.526226 4820 scope.go:117] "RemoveContainer" containerID="08b9f9a05af274542ac8504b8f3fe5b1dc25fa5c2de853c417a27af4676169d7" Feb 01 14:21:53 crc kubenswrapper[4820]: E0201 14:21:53.526450 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-m4skx_openshift-ovn-kubernetes(2c428279-629a-4fd5-9955-1598ed4f6f84)\"" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.539237 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:53Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.547831 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:53Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.562306 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b92ec7eb262908b4723fb55382b9658604ded30de9c7a9ca163cb8f2140f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:53Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.573995 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:53Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.582922 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b609742b-d084-426e-91b2-295a55029b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e02daf42c16519959234d13012c85757024b28979b4bbc46ebef19fc7c57c6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa8913f9bae7df497b0a4124aa907e230e99e11f2b46b0b765adfba3cc7d9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8810f9d798db466017c050c1aee60e6c4dfb31028044e738b334806ba6c538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64fbff194848e43b310b53537ffe0a2a2046761d8ac283714252f3117aa03a4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64fbff194848e43b310b53537ffe0a2a2046761d8ac283714252f3117aa03a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:53Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.597786 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:53Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.614944 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.615015 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.615035 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.614604 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b9f9a05af274542ac8504b8f3fe5b1dc25fa5c
2de853c417a27af4676169d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e15c6751f48320456010d3ed5a0a63fd22ccc4af5795cbf9abb310bc2ba757\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:21:36Z\\\",\\\"message\\\":\\\"l for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0201 14:21:35.960316 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-r52d9 after 0 failed attempt(s)\\\\nI0201 14:21:35.960319 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0201 14:21:35.960324 6318 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0201 14:21:35.960324 6318 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-r52d9\\\\nI0201 14:21:35.960249 6318 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-zbtsv\\\\nI0201 14:21:35.960335 6318 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-zbtsv\\\\nI0201 14:21:35.960339 6318 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-zbtsv in node crc\\\\nI0201 14:21:35.960344 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-zbtsv after 0 failed attempt(s)\\\\nI0201 14:21:35.960348 6318 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-zbt\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b9f9a05af274542ac8504b8f3fe5b1dc25fa5c2de853c417a27af4676169d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:21:53Z\\\",\\\"message\\\":\\\"me:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 14:21:53.304958 6528 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0201 14:21:53.305211 6528 handler.go:208] Removed *v1.Node event handler 2\\\\nI0201 14:21:53.305577 6528 factory.go:656] Stopping watch factory\\\\nI0201 14:21:53.305619 6528 ovnkube.go:599] Stopped ovnkube\\\\nI0201 14:21:53.305660 6528 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0201 14:21:53.305727 6528 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}\\\\nI0201 14:21:53.305761 6528 services_controller.go:360] Finished syncing service metrics on namespace openshift-ingress-operator for network=default : 33.822509ms\\\\nF0201 14:21:53.305776 
6528 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\
\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:53Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.615064 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.615293 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:53Z","lastTransitionTime":"2026-02-01T14:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.624524 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dj7sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8befd56b-2ebe-48c7-9027-4f906b2e09d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dj7sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:53Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.634818 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:53Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.647111 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:53Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.663312 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:53Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.674302 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:53Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.683565 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0756a1ed611ee30513307450b56a0a19ffdd6d4671c7ae1b9e59ead1bddd7564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c39b2e34508f9b18b5d851feabce4a6117eeb3e95172f538ec23b0d2dafd1294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:32Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptp6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:53Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.696254 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:53Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.706222 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:53Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.714682 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:53Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.718826 4820 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.718869 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.718896 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.718913 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.718925 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:53Z","lastTransitionTime":"2026-02-01T14:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.725983 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:53Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.820776 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.820828 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.820842 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.820862 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.820890 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:53Z","lastTransitionTime":"2026-02-01T14:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.922530 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.922562 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.922572 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.922588 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:53 crc kubenswrapper[4820]: I0201 14:21:53.922599 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:53Z","lastTransitionTime":"2026-02-01T14:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.025177 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.025228 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.025237 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.025250 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.025258 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:54Z","lastTransitionTime":"2026-02-01T14:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.127063 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.127124 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.127141 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.127163 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.127181 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:54Z","lastTransitionTime":"2026-02-01T14:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.202942 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 00:41:02.325271098 +0000 UTC Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.228705 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.228743 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.228754 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.228769 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.228779 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:54Z","lastTransitionTime":"2026-02-01T14:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.331243 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.331283 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.331295 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.331311 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.331324 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:54Z","lastTransitionTime":"2026-02-01T14:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.433287 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.433345 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.433359 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.433378 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.433392 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:54Z","lastTransitionTime":"2026-02-01T14:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.530911 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4skx_2c428279-629a-4fd5-9955-1598ed4f6f84/ovnkube-controller/2.log" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.534924 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.534956 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.534964 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.534976 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.534986 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:54Z","lastTransitionTime":"2026-02-01T14:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.637699 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.637763 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.637776 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.637793 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.637805 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:54Z","lastTransitionTime":"2026-02-01T14:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.740255 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.740326 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.740370 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.740402 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.740424 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:54Z","lastTransitionTime":"2026-02-01T14:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.842721 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.842753 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.842780 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.842794 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.842803 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:54Z","lastTransitionTime":"2026-02-01T14:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.945260 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.945309 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.945347 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.945368 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:54 crc kubenswrapper[4820]: I0201 14:21:54.945380 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:54Z","lastTransitionTime":"2026-02-01T14:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.047390 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.047437 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.047454 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.047474 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.047490 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:55Z","lastTransitionTime":"2026-02-01T14:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.150278 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.150341 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.150358 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.150386 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.150402 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:55Z","lastTransitionTime":"2026-02-01T14:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.198678 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.198757 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.198770 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:55 crc kubenswrapper[4820]: E0201 14:21:55.198937 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.198987 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:21:55 crc kubenswrapper[4820]: E0201 14:21:55.199166 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:21:55 crc kubenswrapper[4820]: E0201 14:21:55.199261 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:21:55 crc kubenswrapper[4820]: E0201 14:21:55.199367 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.203044 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 17:52:14.072856884 +0000 UTC Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.253280 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.253321 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.253332 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.253350 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.253362 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:55Z","lastTransitionTime":"2026-02-01T14:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.355153 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.355189 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.355199 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.355213 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.355222 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:55Z","lastTransitionTime":"2026-02-01T14:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.458070 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.458114 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.458127 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.458145 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.458157 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:55Z","lastTransitionTime":"2026-02-01T14:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.560799 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.560866 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.560928 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.560957 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.560981 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:55Z","lastTransitionTime":"2026-02-01T14:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.581390 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.581476 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.581488 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.581508 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.581522 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:55Z","lastTransitionTime":"2026-02-01T14:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:55 crc kubenswrapper[4820]: E0201 14:21:55.593498 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:55Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.597285 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.597339 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
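
The same expired certificate also blocks the node-scoped webhook, node.network-node-identity.openshift.io, so the kubelet's node-status patch is rejected and retried, and the full allocatable/capacity/image inventory is re-sent on every attempt. To see at a glance which webhooks are failing and how often, a short sketch that tallies them from a saved copy of this journal (the file name is hypothetical):

import re
from collections import Counter

LOG = "kubelet.journal.txt"  # assumption: the journal above saved to a file

# The journal escapes quotes inside err="...", so the webhook name appears
# as: failed calling webhook \"<name>\"
pat = re.compile(r'failed calling webhook \\"([^"\\]+)\\"')

hits = Counter()
with open(LOG, encoding="utf-8") as fh:
    for line in fh:
        hits.update(pat.findall(line))

for webhook, n in hits.most_common():
    print(f"{n:5d}  {webhook}")
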
event="NodeHasNoDiskPressure" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.597373 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.597389 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.597400 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:55Z","lastTransitionTime":"2026-02-01T14:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:55 crc kubenswrapper[4820]: E0201 14:21:55.610924 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:55Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.614736 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.614780 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.614795 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.614814 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.614826 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:55Z","lastTransitionTime":"2026-02-01T14:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:55 crc kubenswrapper[4820]: E0201 14:21:55.628828 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:55Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.633911 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.633948 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
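Every one of these status-patch retries fails for the single reason recorded at the tail of the error: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-01. A minimal Go sketch of the check the TLS layer is performing (a hypothetical diagnostic to run against the endpoint, not part of the kubelet):

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Connect without chain verification so we can inspect the leaf
	// certificate the webhook endpoint actually presents.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0]
	now := time.Now()
	fmt.Printf("NotBefore: %s\nNotAfter:  %s\n", leaf.NotBefore, leaf.NotAfter)
	// This is the comparison behind "certificate has expired or is not yet valid".
	if now.After(leaf.NotAfter) || now.Before(leaf.NotBefore) {
		fmt.Println("certificate invalid at", now)
	}
}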
event="NodeHasNoDiskPressure" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.633980 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.633993 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.634002 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:55Z","lastTransitionTime":"2026-02-01T14:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:55 crc kubenswrapper[4820]: E0201 14:21:55.647657 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:55Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.651439 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.651468 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
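For reference, the enormous quoted blob in each retry is one strategic-merge patch against the Node object: a $setElementOrder/conditions directive that pins the merge order of the four condition types, followed by the full conditions, allocatable/capacity, the node's cached image list, and nodeInfo. A stripped-down sketch of that shape, with values copied from the log and the image list omitted (illustrative only, not kubelet code):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Skeleton of the node-status strategic merge patch seen in the log;
	// ephemeral-storage, the ~50-entry images list and nodeInfo are omitted.
	patch := map[string]any{
		"status": map[string]any{
			// Pins the server-side merge order of the conditions list.
			"$setElementOrder/conditions": []map[string]string{
				{"type": "MemoryPressure"}, {"type": "DiskPressure"},
				{"type": "PIDPressure"}, {"type": "Ready"},
			},
			"allocatable": map[string]string{"cpu": "11800m", "memory": "32404556Ki"},
			"capacity":    map[string]string{"cpu": "12", "memory": "32865356Ki"},
			"conditions": []map[string]string{{
				"type":              "Ready",
				"status":            "False",
				"reason":            "KubeletNotReady",
				"lastHeartbeatTime": "2026-02-01T14:21:55Z",
			}},
		},
	}
	b, _ := json.MarshalIndent(patch, "", "  ")
	fmt.Println(string(b))
}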
event="NodeHasNoDiskPressure" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.651477 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.651489 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.651498 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:55Z","lastTransitionTime":"2026-02-01T14:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:55 crc kubenswrapper[4820]: E0201 14:21:55.667322 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:55Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:55 crc kubenswrapper[4820]: E0201 14:21:55.667481 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.668612 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
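The "update node status exceeds retry count" line is the kubelet giving up after its fixed number of attempts (upstream kubelet uses a nodeStatusUpdateRetry constant of 5); the whole cycle then restarts on the next sync period. Roughly, as a simplified sketch of that loop, not the actual kubelet source:

package main

import (
	"errors"
	"fmt"
)

// nodeStatusUpdateRetry mirrors the constant in upstream kubelet.
const nodeStatusUpdateRetry = 5

// tryUpdateNodeStatus stands in for the PATCH that the expired
// webhook certificate keeps rejecting in the log above.
func tryUpdateNodeStatus() error {
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(); err != nil {
			fmt.Println("Error updating node status, will retry:", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	fmt.Println(updateNodeStatus())
}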
event="NodeHasSufficientMemory" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.668651 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.668661 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.668676 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.668688 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:55Z","lastTransitionTime":"2026-02-01T14:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.771410 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.771472 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.771484 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.771499 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.771508 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:55Z","lastTransitionTime":"2026-02-01T14:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.873495 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.873526 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.873537 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.873552 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.873564 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:55Z","lastTransitionTime":"2026-02-01T14:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.976305 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.976352 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.976360 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.976390 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:55 crc kubenswrapper[4820]: I0201 14:21:55.976399 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:55Z","lastTransitionTime":"2026-02-01T14:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.079048 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.079120 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.079154 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.079182 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.079205 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:56Z","lastTransitionTime":"2026-02-01T14:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.182643 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.182702 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.182716 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.182730 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.182740 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:56Z","lastTransitionTime":"2026-02-01T14:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.203212 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 21:54:21.173922428 +0000 UTC Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.284962 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.285037 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.285058 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.285075 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.285086 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:56Z","lastTransitionTime":"2026-02-01T14:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.387659 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.387746 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.387764 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.387787 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.387806 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:56Z","lastTransitionTime":"2026-02-01T14:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.489697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.489736 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.489744 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.489757 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.489765 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:56Z","lastTransitionTime":"2026-02-01T14:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.591594 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.591652 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.591669 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.591693 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.591712 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:56Z","lastTransitionTime":"2026-02-01T14:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.694267 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.694301 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.694310 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.694323 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.694332 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:56Z","lastTransitionTime":"2026-02-01T14:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.796818 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.796908 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.796927 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.796953 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.796973 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:56Z","lastTransitionTime":"2026-02-01T14:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.899487 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.899539 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.899557 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.899580 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:56 crc kubenswrapper[4820]: I0201 14:21:56.899597 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:56Z","lastTransitionTime":"2026-02-01T14:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.002112 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.002154 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.002166 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.002184 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.002195 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:57Z","lastTransitionTime":"2026-02-01T14:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.104795 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.104828 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.104837 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.104850 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.104859 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:57Z","lastTransitionTime":"2026-02-01T14:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.198386 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.198497 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.198531 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.198511 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:21:57 crc kubenswrapper[4820]: E0201 14:21:57.198617 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:21:57 crc kubenswrapper[4820]: E0201 14:21:57.198673 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:21:57 crc kubenswrapper[4820]: E0201 14:21:57.198787 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:21:57 crc kubenswrapper[4820]: E0201 14:21:57.198946 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.203393 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 07:33:14.808184157 +0000 UTC Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.206532 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.206562 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.206570 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.206581 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.206590 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:57Z","lastTransitionTime":"2026-02-01T14:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.309072 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.309155 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.309185 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.309217 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.309239 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:57Z","lastTransitionTime":"2026-02-01T14:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.411740 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.411951 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.411974 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.411998 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.412014 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:57Z","lastTransitionTime":"2026-02-01T14:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.514696 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.514764 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.514784 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.514814 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.514834 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:57Z","lastTransitionTime":"2026-02-01T14:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.617487 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.617524 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.617533 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.617547 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.617556 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:57Z","lastTransitionTime":"2026-02-01T14:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.720336 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.720401 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.720461 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.720489 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.720506 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:57Z","lastTransitionTime":"2026-02-01T14:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.823515 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.823561 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.823572 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.823588 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.823598 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:57Z","lastTransitionTime":"2026-02-01T14:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.927210 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.927259 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.927268 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.927281 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:57 crc kubenswrapper[4820]: I0201 14:21:57.927291 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:57Z","lastTransitionTime":"2026-02-01T14:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.029682 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.029724 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.029733 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.029747 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.029759 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:58Z","lastTransitionTime":"2026-02-01T14:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.132930 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.132977 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.132990 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.133007 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.133019 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:58Z","lastTransitionTime":"2026-02-01T14:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.203913 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 17:18:44.799299412 +0000 UTC Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.235690 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.235733 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.235747 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.235764 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.235777 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:58Z","lastTransitionTime":"2026-02-01T14:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.338179 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.338218 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.338227 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.338244 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.338253 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:58Z","lastTransitionTime":"2026-02-01T14:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.441072 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.441106 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.441117 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.441132 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.441145 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:58Z","lastTransitionTime":"2026-02-01T14:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.544120 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.544183 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.544208 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.544238 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.544265 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:58Z","lastTransitionTime":"2026-02-01T14:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.646643 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.646690 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.646702 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.646720 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.646730 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:58Z","lastTransitionTime":"2026-02-01T14:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.749812 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.749892 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.749909 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.749930 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.749944 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:58Z","lastTransitionTime":"2026-02-01T14:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.852553 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.852596 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.852606 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.852621 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.852632 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:58Z","lastTransitionTime":"2026-02-01T14:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.955486 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.955532 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.955541 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.955558 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:58 crc kubenswrapper[4820]: I0201 14:21:58.955570 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:58Z","lastTransitionTime":"2026-02-01T14:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.058021 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.058068 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.058080 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.058097 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.058107 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:59Z","lastTransitionTime":"2026-02-01T14:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.160043 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.160086 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.160098 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.160115 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.160128 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:59Z","lastTransitionTime":"2026-02-01T14:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.197755 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.197910 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:21:59 crc kubenswrapper[4820]: E0201 14:21:59.198086 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.198121 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.198256 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:21:59 crc kubenswrapper[4820]: E0201 14:21:59.198309 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:21:59 crc kubenswrapper[4820]: E0201 14:21:59.198427 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:21:59 crc kubenswrapper[4820]: E0201 14:21:59.198584 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.204021 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 10:55:07.412867493 +0000 UTC Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.211917 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:59Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.223369 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:59Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.234631 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:59Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.244935 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:59Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.255820 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0756a1ed611ee30513307450b56a0a19ffdd6d4671c7ae1b9e59ead1bddd7564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c39b2e34508f9b18b5d851feabce4a6117eeb3e95172f538ec23b0d2dafd1294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:32Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptp6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:59Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.263636 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.263675 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.263688 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.263705 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.263718 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:59Z","lastTransitionTime":"2026-02-01T14:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.267168 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:59Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.278069 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:59Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.288289 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:59Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.300502 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:59Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.311039 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:59Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.325749 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b92ec7eb262908b4723fb55382b9658604ded30de9c7a9ca163cb8f2140f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:59Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.340742 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:59Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.352930 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b609742b-d084-426e-91b2-295a55029b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e02daf42c16519959234d13012c85757024b28979b4bbc46ebef19fc7c57c6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa8913f9bae7df497b0a4124aa907e230e99e11f2b46b0b765adfba3cc7d9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8810f9d798db466017c050c1aee60e6c4dfb31028044e738b334806ba6c538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64fbff194848e43b310b53537ffe0a2a2046761d8ac283714252f3117aa03a4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64fbff194848e43b310b53537ffe0a2a2046761d8ac283714252f3117aa03a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:59Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.362916 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:59Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.365518 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.365549 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.365561 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.365577 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.365588 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:59Z","lastTransitionTime":"2026-02-01T14:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.373685 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:59Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.400802 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b9f9a05af274542ac8504b8f3fe5b1dc25fa5c2de853c417a27af4676169d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e15c6751f48320456010d3ed5a0a63fd22ccc4af5795cbf9abb310bc2ba757\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:21:36Z\\\",\\\"message\\\":\\\"l for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0201 14:21:35.960316 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-r52d9 after 0 failed attempt(s)\\\\nI0201 14:21:35.960319 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0201 14:21:35.960324 6318 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0201 14:21:35.960324 6318 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-r52d9\\\\nI0201 14:21:35.960249 6318 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-zbtsv\\\\nI0201 14:21:35.960335 6318 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-zbtsv\\\\nI0201 14:21:35.960339 6318 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-zbtsv in node crc\\\\nI0201 14:21:35.960344 6318 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-zbtsv after 0 failed attempt(s)\\\\nI0201 14:21:35.960348 6318 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-zbt\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b9f9a05af274542ac8504b8f3fe5b1dc25fa5c2de853c417a27af4676169d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:21:53Z\\\",\\\"message\\\":\\\"me:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 14:21:53.304958 6528 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0201 14:21:53.305211 6528 handler.go:208] Removed *v1.Node event handler 2\\\\nI0201 14:21:53.305577 6528 factory.go:656] Stopping watch factory\\\\nI0201 14:21:53.305619 6528 ovnkube.go:599] Stopped ovnkube\\\\nI0201 14:21:53.305660 6528 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0201 14:21:53.305727 6528 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}\\\\nI0201 14:21:53.305761 6528 services_controller.go:360] Finished syncing service metrics on namespace openshift-ingress-operator for network=default : 33.822509ms\\\\nF0201 14:21:53.305776 6528 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:59Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.425539 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dj7sg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8befd56b-2ebe-48c7-9027-4f906b2e09d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dj7sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:21:59Z is after 2025-08-24T17:21:41Z" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.471358 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.471470 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.471490 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.471530 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.471546 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:59Z","lastTransitionTime":"2026-02-01T14:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.574074 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.574110 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.574118 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.574132 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.574141 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:59Z","lastTransitionTime":"2026-02-01T14:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.675754 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.676050 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.676151 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.676268 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.676363 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:59Z","lastTransitionTime":"2026-02-01T14:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.779283 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.779604 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.779612 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.779626 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.779635 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:59Z","lastTransitionTime":"2026-02-01T14:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.881840 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.881891 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.881902 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.881915 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.881925 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:59Z","lastTransitionTime":"2026-02-01T14:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.983957 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.984245 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.984323 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.984401 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:21:59 crc kubenswrapper[4820]: I0201 14:21:59.984468 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:21:59Z","lastTransitionTime":"2026-02-01T14:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.086190 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.086227 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.086235 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.086251 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.086260 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:00Z","lastTransitionTime":"2026-02-01T14:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.189527 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.189582 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.189600 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.189624 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.189642 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:00Z","lastTransitionTime":"2026-02-01T14:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.204828 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 06:07:04.737629977 +0000 UTC Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.292531 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.292585 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.292593 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.292607 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.292617 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:00Z","lastTransitionTime":"2026-02-01T14:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.396157 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.396220 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.396233 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.396252 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.396265 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:00Z","lastTransitionTime":"2026-02-01T14:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.499457 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.499537 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.499622 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.499661 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.499686 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:00Z","lastTransitionTime":"2026-02-01T14:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.602344 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.602395 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.602406 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.602424 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.602437 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:00Z","lastTransitionTime":"2026-02-01T14:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.720705 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.720763 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.720776 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.720792 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.720803 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:00Z","lastTransitionTime":"2026-02-01T14:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.823763 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.823790 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.823799 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.823811 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.823819 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:00Z","lastTransitionTime":"2026-02-01T14:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.926714 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.926748 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.926777 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.926790 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:00 crc kubenswrapper[4820]: I0201 14:22:00.926800 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:00Z","lastTransitionTime":"2026-02-01T14:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.028504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.028580 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.028596 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.028617 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.028631 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:01Z","lastTransitionTime":"2026-02-01T14:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.130636 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.130671 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.130679 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.130692 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.130700 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:01Z","lastTransitionTime":"2026-02-01T14:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.198122 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.198208 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:22:01 crc kubenswrapper[4820]: E0201 14:22:01.198236 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.198288 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:22:01 crc kubenswrapper[4820]: E0201 14:22:01.198384 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.198447 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:22:01 crc kubenswrapper[4820]: E0201 14:22:01.198579 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
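[editor's note: the "no CNI configuration file in /etc/kubernetes/cni/net.d/" loop above is the kubelet's network-readiness check finding an empty CNI config directory, which here appears to be because ovnkube-node, the component that writes that config, is failing (see the F0201 ovnkube fatal near the top of this excerpt). Below is a minimal sketch of the same wait, assuming only the directory path quoted verbatim in these records; the file patterns and polling cadence are illustrative, not the kubelet's actual implementation:]

    # Poll the CNI confdir the kubelet complains about; the node stays
    # NotReady until some CNI config file appears here.
    import glob
    import time

    CNI_DIR = "/etc/kubernetes/cni/net.d"  # path quoted from the log records above

    def cni_configs():
        # An empty confdir is what produces NetworkReady=false above.
        found = []
        for pattern in ("*.conf", "*.conflist", "*.json"):
            found.extend(glob.glob(f"{CNI_DIR}/{pattern}"))
        return sorted(found)

    if __name__ == "__main__":
        while not cni_configs():
            print("no CNI configuration file yet; network plugin not ready")
            time.sleep(2)
        print("CNI config present:", cni_configs())

[Once ovnkube-node starts cleanly and drops its config into that directory, the NodeNotReady/"Node became not ready" records above stop repeating.]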
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:22:01 crc kubenswrapper[4820]: E0201 14:22:01.198654 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.206048 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 18:23:58.225029989 +0000 UTC Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.233036 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.233082 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.233099 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.233120 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.233133 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:01Z","lastTransitionTime":"2026-02-01T14:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.335865 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.335957 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.335977 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.336001 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.336019 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:01Z","lastTransitionTime":"2026-02-01T14:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.438565 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.438600 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.438611 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.438626 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.438639 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:01Z","lastTransitionTime":"2026-02-01T14:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.541718 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.541778 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.541796 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.541820 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.541838 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:01Z","lastTransitionTime":"2026-02-01T14:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.644200 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.644246 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.644256 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.644272 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.644283 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:01Z","lastTransitionTime":"2026-02-01T14:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.747408 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.747450 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.747459 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.747475 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.747486 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:01Z","lastTransitionTime":"2026-02-01T14:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.851058 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.851103 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.851117 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.851135 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.851145 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:01Z","lastTransitionTime":"2026-02-01T14:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.954180 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.954247 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.954257 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.954272 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:01 crc kubenswrapper[4820]: I0201 14:22:01.954281 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:01Z","lastTransitionTime":"2026-02-01T14:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.056717 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.056762 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.056774 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.056788 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.056797 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:02Z","lastTransitionTime":"2026-02-01T14:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.159851 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.159956 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.159973 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.159998 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.160015 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:02Z","lastTransitionTime":"2026-02-01T14:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.206120 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:11:04.303096385 +0000 UTC Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.263383 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.263438 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.263460 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.263487 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.263507 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:02Z","lastTransitionTime":"2026-02-01T14:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.366244 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.366311 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.366330 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.366353 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.366372 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:02Z","lastTransitionTime":"2026-02-01T14:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.469109 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.469162 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.469175 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.469195 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.469208 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:02Z","lastTransitionTime":"2026-02-01T14:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.571811 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.571845 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.571853 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.571865 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.571888 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:02Z","lastTransitionTime":"2026-02-01T14:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.674207 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.674242 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.674253 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.674265 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.674274 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:02Z","lastTransitionTime":"2026-02-01T14:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.776486 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.776536 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.776553 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.776578 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.776596 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:02Z","lastTransitionTime":"2026-02-01T14:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.878985 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.879034 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.879046 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.879064 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.879077 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:02Z","lastTransitionTime":"2026-02-01T14:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.981934 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.981972 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.981986 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.982002 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:02 crc kubenswrapper[4820]: I0201 14:22:02.982014 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:02Z","lastTransitionTime":"2026-02-01T14:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.084428 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.084498 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.084520 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.084550 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.084572 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:03Z","lastTransitionTime":"2026-02-01T14:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.186496 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.186530 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.186539 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.186552 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.186561 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:03Z","lastTransitionTime":"2026-02-01T14:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.197716 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.197739 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.197739 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.197768 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:22:03 crc kubenswrapper[4820]: E0201 14:22:03.197824 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
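[editor's note: every "Failed to update status for pod" record earlier in this excerpt carries the same root error: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-02-01. A hedged sketch of how one might confirm that window from the node follows, assuming the third-party "cryptography" package is installed; the host and port are taken from the log, everything else is illustrative:]

    # Fetch the webhook's serving certificate without trusting it, then print
    # its validity window to compare against the node clock.
    import socket
    import ssl
    from cryptography import x509  # assumption: `pip install cryptography` is available

    HOST, PORT = "127.0.0.1", 9743  # endpoint from the failed webhook Post in the log

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # inspect the cert, do not verify it

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)  # expect 2025-08-24 17:21:41 per the log

[Until that certificate is reissued, or the clock skew between node time and certificate validity is resolved, the kubelet's status patches will keep failing exactly as logged above.]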
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:22:03 crc kubenswrapper[4820]: E0201 14:22:03.198113 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:22:03 crc kubenswrapper[4820]: E0201 14:22:03.198233 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:22:03 crc kubenswrapper[4820]: E0201 14:22:03.198291 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.206932 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 08:14:55.140131246 +0000 UTC Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.289210 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.289266 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.289282 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.289306 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.289323 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:03Z","lastTransitionTime":"2026-02-01T14:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.390998 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.391033 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.391044 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.391061 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.391073 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:03Z","lastTransitionTime":"2026-02-01T14:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.492715 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.492755 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.492764 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.492776 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.492785 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:03Z","lastTransitionTime":"2026-02-01T14:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.595074 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.595117 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.595128 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.595144 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.595159 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:03Z","lastTransitionTime":"2026-02-01T14:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.697956 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.698020 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.698036 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.698061 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.698077 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:03Z","lastTransitionTime":"2026-02-01T14:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.800109 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.800153 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.800162 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.800177 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.800187 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:03Z","lastTransitionTime":"2026-02-01T14:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.902346 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.902390 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.902403 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.902420 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:03 crc kubenswrapper[4820]: I0201 14:22:03.902430 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:03Z","lastTransitionTime":"2026-02-01T14:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.005345 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.005403 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.005416 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.005434 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.005451 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:04Z","lastTransitionTime":"2026-02-01T14:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.107888 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.107920 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.107928 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.107941 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.107950 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:04Z","lastTransitionTime":"2026-02-01T14:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.207514 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 06:24:00.971034486 +0000 UTC Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.210000 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.210023 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.210033 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.210047 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.210057 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:04Z","lastTransitionTime":"2026-02-01T14:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.311909 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.311940 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.311950 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.311964 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.311974 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:04Z","lastTransitionTime":"2026-02-01T14:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.415736 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.416222 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.416253 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.416278 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.416354 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:04Z","lastTransitionTime":"2026-02-01T14:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.518955 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.518983 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.518993 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.519006 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.519015 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:04Z","lastTransitionTime":"2026-02-01T14:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.621657 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.621709 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.621721 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.621736 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.621745 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:04Z","lastTransitionTime":"2026-02-01T14:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.727389 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.727641 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.727658 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.727680 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.727698 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:04Z","lastTransitionTime":"2026-02-01T14:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.830340 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.830543 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.830552 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.830567 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.830577 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:04Z","lastTransitionTime":"2026-02-01T14:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.932949 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.932989 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.933000 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.933013 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:04 crc kubenswrapper[4820]: I0201 14:22:04.933022 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:04Z","lastTransitionTime":"2026-02-01T14:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.034725 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.034759 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.034767 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.034780 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.034789 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:05Z","lastTransitionTime":"2026-02-01T14:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.137549 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.137611 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.137628 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.137653 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.137672 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:05Z","lastTransitionTime":"2026-02-01T14:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.198319 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.198431 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.198474 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.198608 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:22:05 crc kubenswrapper[4820]: E0201 14:22:05.198604 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:22:05 crc kubenswrapper[4820]: E0201 14:22:05.198787 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:22:05 crc kubenswrapper[4820]: E0201 14:22:05.198903 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:22:05 crc kubenswrapper[4820]: E0201 14:22:05.198988 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.208503 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 06:33:49.566507443 +0000 UTC Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.240304 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.240345 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.240357 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.240372 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.240383 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:05Z","lastTransitionTime":"2026-02-01T14:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.342358 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.342397 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.342405 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.342422 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.342431 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:05Z","lastTransitionTime":"2026-02-01T14:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.444110 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.444145 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.444153 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.444166 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.444176 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:05Z","lastTransitionTime":"2026-02-01T14:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.546839 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.546899 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.546910 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.546923 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.546933 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:05Z","lastTransitionTime":"2026-02-01T14:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.638013 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs\") pod \"network-metrics-daemon-dj7sg\" (UID: \"8befd56b-2ebe-48c7-9027-4f906b2e09d5\") " pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:22:05 crc kubenswrapper[4820]: E0201 14:22:05.638130 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 14:22:05 crc kubenswrapper[4820]: E0201 14:22:05.638183 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs podName:8befd56b-2ebe-48c7-9027-4f906b2e09d5 nodeName:}" failed. No retries permitted until 2026-02-01 14:22:37.638166031 +0000 UTC m=+99.158532315 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs") pod "network-metrics-daemon-dj7sg" (UID: "8befd56b-2ebe-48c7-9027-4f906b2e09d5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.648617 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.648655 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.648665 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.648679 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.648689 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:05Z","lastTransitionTime":"2026-02-01T14:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.750974 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.751002 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.751011 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.751023 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.751031 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:05Z","lastTransitionTime":"2026-02-01T14:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.773977 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.774022 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.774032 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.774046 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.774056 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:05Z","lastTransitionTime":"2026-02-01T14:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:05 crc kubenswrapper[4820]: E0201 14:22:05.790776 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:05Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.793922 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.793959 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.793969 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.793984 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.793992 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:05Z","lastTransitionTime":"2026-02-01T14:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:05 crc kubenswrapper[4820]: E0201 14:22:05.806733 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:05Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.809594 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.809644 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.809654 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.809667 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.809678 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:05Z","lastTransitionTime":"2026-02-01T14:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:05 crc kubenswrapper[4820]: E0201 14:22:05.821118 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:05Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.824424 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.824452 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.824461 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.824474 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.824483 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:05Z","lastTransitionTime":"2026-02-01T14:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:05 crc kubenswrapper[4820]: E0201 14:22:05.836849 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:05Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.842070 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.842112 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.842126 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.842144 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.842155 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:05Z","lastTransitionTime":"2026-02-01T14:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:05 crc kubenswrapper[4820]: E0201 14:22:05.854352 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:05Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:05 crc kubenswrapper[4820]: E0201 14:22:05.854460 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.855506 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.855545 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.855556 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.855568 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.855576 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:05Z","lastTransitionTime":"2026-02-01T14:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.957436 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.957502 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.957514 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.957530 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:05 crc kubenswrapper[4820]: I0201 14:22:05.957541 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:05Z","lastTransitionTime":"2026-02-01T14:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.059519 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.059594 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.059612 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.059628 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.059665 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:06Z","lastTransitionTime":"2026-02-01T14:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.161623 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.161669 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.161680 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.161753 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.161771 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:06Z","lastTransitionTime":"2026-02-01T14:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.209392 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 02:48:02.698664648 +0000 UTC Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.264651 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.264707 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.264718 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.264767 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.264780 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:06Z","lastTransitionTime":"2026-02-01T14:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.324015 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.324819 4820 scope.go:117] "RemoveContainer" containerID="08b9f9a05af274542ac8504b8f3fe5b1dc25fa5c2de853c417a27af4676169d7" Feb 01 14:22:06 crc kubenswrapper[4820]: E0201 14:22:06.325120 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-m4skx_openshift-ovn-kubernetes(2c428279-629a-4fd5-9955-1598ed4f6f84)\"" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.337810 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:06Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.349629 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:06Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.361198 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:06Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.367109 4820 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.367134 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.367144 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.367157 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.367167 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:06Z","lastTransitionTime":"2026-02-01T14:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.373707 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:06Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.387546 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b92ec7eb262908b4723fb55382b9658604ded30de9c7a9ca163cb8f2140f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",
\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8
ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:06Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.401581 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:06Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.411087 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b609742b-d084-426e-91b2-295a55029b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e02daf42c16519959234d13012c85757024b28979b4bbc46ebef19fc7c57c6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa8913f9bae7df497b0a4124aa907e230e99e11f2b46b0b765adfba3cc7d9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8810f9d798db466017c050c1aee60e6c4dfb31028044e738b334806ba6c538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64fbff194848e43b310b53537ffe0a2a2046761d8ac283714252f3117aa03a4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64fbff194848e43b310b53537ffe0a2a2046761d8ac283714252f3117aa03a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:06Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.424609 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:06Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.434346 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:06Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.442968 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:06Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.459773 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b9f9a05af274542ac8504b8f3fe5b1dc25fa5c2de853c417a27af4676169d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b9f9a05af274542ac8504b8f3fe5b1dc25fa5c2de853c417a27af4676169d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:21:53Z\\\",\\\"message\\\":\\\"me:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 14:21:53.304958 6528 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0201 14:21:53.305211 6528 handler.go:208] Removed *v1.Node event handler 2\\\\nI0201 14:21:53.305577 6528 factory.go:656] Stopping watch factory\\\\nI0201 14:21:53.305619 6528 ovnkube.go:599] Stopped ovnkube\\\\nI0201 14:21:53.305660 6528 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0201 14:21:53.305727 6528 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}\\\\nI0201 14:21:53.305761 6528 services_controller.go:360] Finished syncing service metrics on namespace openshift-ingress-operator for network=default : 33.822509ms\\\\nF0201 14:21:53.305776 6528 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m4skx_openshift-ovn-kubernetes(2c428279-629a-4fd5-9955-1598ed4f6f84)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:06Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.468625 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.468657 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.468666 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.468679 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.468689 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:06Z","lastTransitionTime":"2026-02-01T14:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.469702 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dj7sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8befd56b-2ebe-48c7-9027-4f906b2e09d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dj7sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:06Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.479811 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:06Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.489457 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:06Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.499451 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:06Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.508928 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:06Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.520763 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0756a1ed611ee30513307450b56a0a19ffdd6d4671c7ae1b9e59ead1bddd7564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c39b2e34508f9b18b5d851feabce4a6117eeb3e95172f538ec23b0d2dafd1294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:32Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptp6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:06Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.570623 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.570680 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.570702 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.570741 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.570760 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:06Z","lastTransitionTime":"2026-02-01T14:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.673475 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.673510 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.673527 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.673548 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.673566 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:06Z","lastTransitionTime":"2026-02-01T14:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.776096 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.776127 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.776136 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.776150 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.776162 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:06Z","lastTransitionTime":"2026-02-01T14:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.877787 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.877813 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.877821 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.877835 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.877844 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:06Z","lastTransitionTime":"2026-02-01T14:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.979868 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.979911 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.979920 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.979932 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:06 crc kubenswrapper[4820]: I0201 14:22:06.979941 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:06Z","lastTransitionTime":"2026-02-01T14:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.081910 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.081961 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.081973 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.081990 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.082003 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:07Z","lastTransitionTime":"2026-02-01T14:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.184263 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.184313 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.184328 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.184350 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.184361 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:07Z","lastTransitionTime":"2026-02-01T14:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.198612 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.198641 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.198679 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.198646 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:22:07 crc kubenswrapper[4820]: E0201 14:22:07.198783 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:22:07 crc kubenswrapper[4820]: E0201 14:22:07.198974 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:22:07 crc kubenswrapper[4820]: E0201 14:22:07.199082 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:22:07 crc kubenswrapper[4820]: E0201 14:22:07.199165 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.210144 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 02:26:53.165146498 +0000 UTC Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.286279 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.286315 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.286325 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.286341 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.286350 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:07Z","lastTransitionTime":"2026-02-01T14:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.388490 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.388560 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.388578 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.388602 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.388624 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:07Z","lastTransitionTime":"2026-02-01T14:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.490896 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.490935 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.490947 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.490964 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.490976 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:07Z","lastTransitionTime":"2026-02-01T14:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.593151 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.593217 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.593240 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.593269 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.593292 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:07Z","lastTransitionTime":"2026-02-01T14:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.695954 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.696026 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.696050 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.696078 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.696144 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:07Z","lastTransitionTime":"2026-02-01T14:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.798264 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.798298 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.798307 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.798320 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.798331 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:07Z","lastTransitionTime":"2026-02-01T14:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.900565 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.900614 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.900625 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.900638 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:07 crc kubenswrapper[4820]: I0201 14:22:07.900647 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:07Z","lastTransitionTime":"2026-02-01T14:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.002359 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.002413 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.002433 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.002455 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.002473 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:08Z","lastTransitionTime":"2026-02-01T14:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.105076 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.105118 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.105130 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.105148 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.105160 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:08Z","lastTransitionTime":"2026-02-01T14:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.207729 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.207784 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.207795 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.207826 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.207841 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:08Z","lastTransitionTime":"2026-02-01T14:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.211152 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 03:42:47.83882062 +0000 UTC Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.310449 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.310494 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.310504 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.310538 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.310548 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:08Z","lastTransitionTime":"2026-02-01T14:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.412665 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.412703 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.412714 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.412729 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.412740 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:08Z","lastTransitionTime":"2026-02-01T14:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.515219 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.515263 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.515276 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.515293 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.515305 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:08Z","lastTransitionTime":"2026-02-01T14:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.617622 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.617688 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.617713 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.617746 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.617768 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:08Z","lastTransitionTime":"2026-02-01T14:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.720369 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.720424 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.720436 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.720453 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.720464 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:08Z","lastTransitionTime":"2026-02-01T14:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.822691 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.822760 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.822773 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.822814 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.822828 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:08Z","lastTransitionTime":"2026-02-01T14:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.924903 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.924946 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.924956 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.924969 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:08 crc kubenswrapper[4820]: I0201 14:22:08.924980 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:08Z","lastTransitionTime":"2026-02-01T14:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.027540 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.027578 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.027588 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.027603 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.027615 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:09Z","lastTransitionTime":"2026-02-01T14:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.129752 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.129834 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.129852 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.129915 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.129934 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:09Z","lastTransitionTime":"2026-02-01T14:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.198617 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.198665 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.198683 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:22:09 crc kubenswrapper[4820]: E0201 14:22:09.198769 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.198834 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:22:09 crc kubenswrapper[4820]: E0201 14:22:09.198952 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:22:09 crc kubenswrapper[4820]: E0201 14:22:09.199056 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:22:09 crc kubenswrapper[4820]: E0201 14:22:09.199149 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.211071 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b609742b-d084-426e-91b2-295a55029b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e02daf42c16519959234d13012c85757024b28979b4bbc46ebef19fc7c57c6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa8913f9bae7df497b0a4124aa907e230e99e11f2b46b0b765adfba3cc7d9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8810f9d798db466017c050c1aee60e6c4dfb31028044e738b334806ba6c538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64fbff194848e43b310b53537ffe0a2a2046761d8ac283714252f3117aa03a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64fbff194848e43b310b53537ffe0a2a2046761d8ac283714252f3117aa03a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:09Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.211306 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 11:38:34.713587203 +0000 UTC Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.223429 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:09Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.232383 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.232424 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.232434 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.232449 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.232459 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:09Z","lastTransitionTime":"2026-02-01T14:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.235114 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:09Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.246997 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:09Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.265201 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b92ec7eb262908b4723fb55382b9658604ded30de9c7a9ca163cb8f2140f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:09Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.280527 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:09Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.297665 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b9f9a05af274542ac8504b8f3fe5b1dc25fa5c2de853c417a27af4676169d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b9f9a05af274542ac8504b8f3fe5b1dc25fa5c2de853c417a27af4676169d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:21:53Z\\\",\\\"message\\\":\\\"me:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 14:21:53.304958 6528 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0201 14:21:53.305211 6528 handler.go:208] Removed *v1.Node event handler 2\\\\nI0201 14:21:53.305577 6528 factory.go:656] Stopping watch factory\\\\nI0201 14:21:53.305619 6528 ovnkube.go:599] Stopped ovnkube\\\\nI0201 14:21:53.305660 6528 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0201 14:21:53.305727 6528 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}\\\\nI0201 14:21:53.305761 6528 services_controller.go:360] Finished syncing service metrics on namespace openshift-ingress-operator for network=default : 33.822509ms\\\\nF0201 14:21:53.305776 6528 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m4skx_openshift-ovn-kubernetes(2c428279-629a-4fd5-9955-1598ed4f6f84)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:09Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.312951 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dj7sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8befd56b-2ebe-48c7-9027-4f906b2e09d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dj7sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:09Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.327526 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:09Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.334134 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.334165 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.334175 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.334190 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.334202 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:09Z","lastTransitionTime":"2026-02-01T14:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.340156 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:09Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.352712 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:09Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.362333 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:09Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.371969 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0756a1ed611ee30513307450b56a0a19ffdd6d4671c7ae1b9e59ead1bddd7564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c39b2e34508f9b18b5d851feabce4a6117eeb3e95172f538ec23b0d2dafd1294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptp6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:09Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.384803 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\
"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:09Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.396648 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-01T14:22:09Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.406836 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:09Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.418853 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:09Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.436609 4820 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.436665 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.436678 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.436696 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.436708 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:09Z","lastTransitionTime":"2026-02-01T14:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.539281 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.539325 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.539336 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.539352 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.539363 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:09Z","lastTransitionTime":"2026-02-01T14:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.641887 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.641929 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.641940 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.641954 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.641962 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:09Z","lastTransitionTime":"2026-02-01T14:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.743579 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.743611 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.743620 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.743634 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.743643 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:09Z","lastTransitionTime":"2026-02-01T14:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.845470 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.845507 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.845522 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.845539 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.845549 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:09Z","lastTransitionTime":"2026-02-01T14:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.947934 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.947997 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.948010 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.948026 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:09 crc kubenswrapper[4820]: I0201 14:22:09.948040 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:09Z","lastTransitionTime":"2026-02-01T14:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.049944 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.049982 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.049994 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.050011 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.050022 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:10Z","lastTransitionTime":"2026-02-01T14:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.152324 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.152380 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.152392 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.152412 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.152426 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:10Z","lastTransitionTime":"2026-02-01T14:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.212033 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:20:21.541218052 +0000 UTC Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.254271 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.254323 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.254335 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.254352 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.254363 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:10Z","lastTransitionTime":"2026-02-01T14:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.356447 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.356489 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.356499 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.356514 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.356525 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:10Z","lastTransitionTime":"2026-02-01T14:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.458454 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.458493 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.458506 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.458521 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.458533 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:10Z","lastTransitionTime":"2026-02-01T14:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.561135 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.561168 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.561179 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.561193 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.561203 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:10Z","lastTransitionTime":"2026-02-01T14:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.587411 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q922s_20f8fae3-1755-461a-8748-a0033423ad5a/kube-multus/0.log" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.587479 4820 generic.go:334] "Generic (PLEG): container finished" podID="20f8fae3-1755-461a-8748-a0033423ad5a" containerID="48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7" exitCode=1 Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.587518 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q922s" event={"ID":"20f8fae3-1755-461a-8748-a0033423ad5a","Type":"ContainerDied","Data":"48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7"} Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.588712 4820 scope.go:117] "RemoveContainer" containerID="48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.604411 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:10Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.622441 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0756a1ed611ee30513307450b56a0a19ffdd6d4671c7ae1b9e59ead1bddd7564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c39b2e34508f9b18b5d851feabce4a6117eeb3e95172f538ec23b0d2dafd1294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:
32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptp6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:10Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.638281 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:10Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.649515 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:10Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.661174 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:10Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.662804 4820 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.662848 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.662865 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.662919 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.662934 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:10Z","lastTransitionTime":"2026-02-01T14:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.673815 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:22:09Z\\\",\\\"message\\\":\\\"2026-02-01T14:21:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_42fa77c1-ef3c-4824-9163-b3a470b22750\\\\n2026-02-01T14:21:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_42fa77c1-ef3c-4824-9163-b3a470b22750 to /host/opt/cni/bin/\\\\n2026-02-01T14:21:24Z [verbose] 
multus-daemon started\\\\n2026-02-01T14:21:24Z [verbose] Readiness Indicator file check\\\\n2026-02-01T14:22:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:10Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.686461 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:10Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.698823 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:10Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.712004 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b609742b-d084-426e-91b2-295a55029b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e02daf42c16519959234d13012c85757024b28979b4bbc46ebef19fc7c57c6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa8913f9bae7df497b0a4124aa907e230e99e11f2b46b0b765adfba3cc7d9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8810f9d798db466017c050c1aee60e6c4dfb31028044e738b334806ba6c538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64fbff194848e43b310b53537ffe0a2a2046761d8ac283714252f3117aa03a4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64fbff194848e43b310b53537ffe0a2a2046761d8ac283714252f3117aa03a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:10Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.722269 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:10Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.735044 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:10Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.744817 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:10Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.759650 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b92ec7eb262908b4723fb55382b9658604ded30de9c7a9ca163cb8f2140f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:10Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.765307 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.765335 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:10 crc 
kubenswrapper[4820]: I0201 14:22:10.765343 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.765357 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.765366 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:10Z","lastTransitionTime":"2026-02-01T14:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.776310 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dj7sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8befd56b-2ebe-48c7-9027-4f906b2e09d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dj7sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:10Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.793681 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b9f9a05af274542ac8504b8f3fe5b1dc25fa5c
2de853c417a27af4676169d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b9f9a05af274542ac8504b8f3fe5b1dc25fa5c2de853c417a27af4676169d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:21:53Z\\\",\\\"message\\\":\\\"me:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 14:21:53.304958 6528 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0201 14:21:53.305211 6528 handler.go:208] Removed *v1.Node event handler 2\\\\nI0201 14:21:53.305577 6528 factory.go:656] Stopping watch factory\\\\nI0201 14:21:53.305619 6528 ovnkube.go:599] Stopped ovnkube\\\\nI0201 14:21:53.305660 6528 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0201 14:21:53.305727 6528 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}\\\\nI0201 14:21:53.305761 6528 services_controller.go:360] Finished syncing service metrics on namespace openshift-ingress-operator for network=default : 33.822509ms\\\\nF0201 14:21:53.305776 6528 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m4skx_openshift-ovn-kubernetes(2c428279-629a-4fd5-9955-1598ed4f6f84)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:10Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.806334 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:10Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.819536 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:10Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.867550 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.867594 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.867602 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.867618 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.867627 4820 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:10Z","lastTransitionTime":"2026-02-01T14:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.970064 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.970112 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.970121 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.970136 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:10 crc kubenswrapper[4820]: I0201 14:22:10.970145 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:10Z","lastTransitionTime":"2026-02-01T14:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.071767 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.071796 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.071805 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.071817 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.071826 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:11Z","lastTransitionTime":"2026-02-01T14:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.173417 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.173458 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.173467 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.173481 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.173489 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:11Z","lastTransitionTime":"2026-02-01T14:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.197941 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.197975 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.198147 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:22:11 crc kubenswrapper[4820]: E0201 14:22:11.198245 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:22:11 crc kubenswrapper[4820]: E0201 14:22:11.198323 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:22:11 crc kubenswrapper[4820]: E0201 14:22:11.198410 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.198484 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:22:11 crc kubenswrapper[4820]: E0201 14:22:11.198549 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.207509 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.212956 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 03:38:15.468529028 +0000 UTC Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.275907 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.275948 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.275956 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.275969 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.275979 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:11Z","lastTransitionTime":"2026-02-01T14:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.378097 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.378136 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.378144 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.378158 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.378167 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:11Z","lastTransitionTime":"2026-02-01T14:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.480628 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.480695 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.480708 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.480725 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.480737 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:11Z","lastTransitionTime":"2026-02-01T14:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.582702 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.582743 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.582755 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.582770 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.582782 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:11Z","lastTransitionTime":"2026-02-01T14:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.591703 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q922s_20f8fae3-1755-461a-8748-a0033423ad5a/kube-multus/0.log" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.591798 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q922s" event={"ID":"20f8fae3-1755-461a-8748-a0033423ad5a","Type":"ContainerStarted","Data":"0da5ee5ab8907d5144b25f7740d060ca28d4b2be0d366187088c6562cd92eb9a"} Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.606231 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:11Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.617084 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:11Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.627377 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0756a1ed611ee30513307450b56a0a19ffdd6d4671c7ae1b9e59ead1bddd7564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c39b2e34508f9b18b5d851feabce4a6117eeb3e95172f538ec23b0d2dafd1294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:32Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptp6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:11Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.643976 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:11Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.655198 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:11Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.667666 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:11Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.678758 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0da5ee5ab8907d5144b25f7740d060ca28d4b2be0d366187088c6562cd92eb9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:22:09Z\\\",\\\"message\\\":\\\"2026-02-01T14:21:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_42fa77c1-ef3c-4824-9163-b3a470b22750\\\\n2026-02-01T14:21:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_42fa77c1-ef3c-4824-9163-b3a470b22750 to /host/opt/cni/bin/\\\\n2026-02-01T14:21:24Z [verbose] multus-daemon started\\\\n2026-02-01T14:21:24Z [verbose] Readiness Indicator file check\\\\n2026-02-01T14:22:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:11Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.684970 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.685034 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.685054 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.685078 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.685103 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:11Z","lastTransitionTime":"2026-02-01T14:22:11Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.698421 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b92ec7eb262908b4723fb55382b9658604ded30de9c7a9ca163cb8f2140f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-01T14:22:11Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.709170 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66de5dd2-63ee-437a-bfd0-85d8e2015d71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f676e6b13a32237c36128907682976a34657cec50f75a143a51a82ee71b1dcd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2b45242751c6814ff86454a22b2269910501dc8e99910a0d42a43333206e408\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2b45242751c6814ff86454a22b2269910501dc8e99910a0d42a43333206e408\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:11Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 
14:22:11.724017 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:11Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.737245 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b609742b-d084-426e-91b2-295a55029b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e02daf42c16519959234d13012c85757024b28979b4bbc46ebef19fc7c57c6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa8913f9bae7df497b0a4124aa907e230e99e11f2b46b0b765adfba3cc7d9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8810f9d798db466017c050c1aee60e6c4dfb31028044e738b334806ba6c538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64fbff194848e43b310b53537ffe0a2a2046761d8ac283714252f3117aa03a4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64fbff194848e43b310b53537ffe0a2a2046761d8ac283714252f3117aa03a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:11Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.750581 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:11Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.764811 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:11Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.776938 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:11Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.787261 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 
14:22:11.787298 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.787309 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.787325 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.787336 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:11Z","lastTransitionTime":"2026-02-01T14:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.795858 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b9f9a05af274542ac8504b8f3fe5b1dc25fa5c
2de853c417a27af4676169d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b9f9a05af274542ac8504b8f3fe5b1dc25fa5c2de853c417a27af4676169d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:21:53Z\\\",\\\"message\\\":\\\"me:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 14:21:53.304958 6528 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0201 14:21:53.305211 6528 handler.go:208] Removed *v1.Node event handler 2\\\\nI0201 14:21:53.305577 6528 factory.go:656] Stopping watch factory\\\\nI0201 14:21:53.305619 6528 ovnkube.go:599] Stopped ovnkube\\\\nI0201 14:21:53.305660 6528 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0201 14:21:53.305727 6528 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}\\\\nI0201 14:21:53.305761 6528 services_controller.go:360] Finished syncing service metrics on namespace openshift-ingress-operator for network=default : 33.822509ms\\\\nF0201 14:21:53.305776 6528 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m4skx_openshift-ovn-kubernetes(2c428279-629a-4fd5-9955-1598ed4f6f84)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:11Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.808605 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dj7sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8befd56b-2ebe-48c7-9027-4f906b2e09d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dj7sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:11Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.821889 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:11Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.834271 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:11Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.889605 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.889648 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.889660 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.889675 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.889687 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:11Z","lastTransitionTime":"2026-02-01T14:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.991381 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.991413 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.991421 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.991435 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:11 crc kubenswrapper[4820]: I0201 14:22:11.991445 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:11Z","lastTransitionTime":"2026-02-01T14:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.093949 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.094000 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.094016 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.094040 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.094057 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:12Z","lastTransitionTime":"2026-02-01T14:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.196513 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.196564 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.196587 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.196616 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.196638 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:12Z","lastTransitionTime":"2026-02-01T14:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.213636 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 01:47:49.6216745 +0000 UTC Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.299152 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.299247 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.299272 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.299300 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.299321 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:12Z","lastTransitionTime":"2026-02-01T14:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.401691 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.401742 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.401754 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.401773 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.401787 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:12Z","lastTransitionTime":"2026-02-01T14:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.504440 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.504483 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.504493 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.504512 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.504524 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:12Z","lastTransitionTime":"2026-02-01T14:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.606845 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.606908 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.606922 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.606938 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.606949 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:12Z","lastTransitionTime":"2026-02-01T14:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.709288 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.709322 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.709331 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.709344 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.709355 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:12Z","lastTransitionTime":"2026-02-01T14:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.811772 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.811822 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.811836 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.811855 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.811868 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:12Z","lastTransitionTime":"2026-02-01T14:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.913396 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.913427 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.913439 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.913454 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:12 crc kubenswrapper[4820]: I0201 14:22:12.913465 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:12Z","lastTransitionTime":"2026-02-01T14:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.015114 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.015145 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.015152 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.015165 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.015174 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:13Z","lastTransitionTime":"2026-02-01T14:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.117232 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.117302 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.117315 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.117354 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.117369 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:13Z","lastTransitionTime":"2026-02-01T14:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.198078 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.198096 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:22:13 crc kubenswrapper[4820]: E0201 14:22:13.198215 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.198252 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:22:13 crc kubenswrapper[4820]: E0201 14:22:13.198369 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:22:13 crc kubenswrapper[4820]: E0201 14:22:13.198440 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.198162 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:22:13 crc kubenswrapper[4820]: E0201 14:22:13.198768 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.213919 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 00:43:32.775708058 +0000 UTC Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.219382 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.219413 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.219423 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.219436 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.219447 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:13Z","lastTransitionTime":"2026-02-01T14:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.322126 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.322176 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.322186 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.322199 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.322209 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:13Z","lastTransitionTime":"2026-02-01T14:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.425235 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.425289 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.425302 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.425322 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.425337 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:13Z","lastTransitionTime":"2026-02-01T14:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.529634 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.529727 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.529747 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.529780 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.529809 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:13Z","lastTransitionTime":"2026-02-01T14:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.632647 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.632695 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.632705 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.632720 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.632730 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:13Z","lastTransitionTime":"2026-02-01T14:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.735423 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.735459 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.735470 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.735484 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.735494 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:13Z","lastTransitionTime":"2026-02-01T14:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.838549 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.838588 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.838607 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.838620 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.838629 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:13Z","lastTransitionTime":"2026-02-01T14:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.941345 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.941392 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.941409 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.941430 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:13 crc kubenswrapper[4820]: I0201 14:22:13.941447 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:13Z","lastTransitionTime":"2026-02-01T14:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.043817 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.043866 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.043917 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.043947 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.043971 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:14Z","lastTransitionTime":"2026-02-01T14:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.145713 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.145741 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.145748 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.145761 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.145774 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:14Z","lastTransitionTime":"2026-02-01T14:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.214037 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:18:25.254282707 +0000 UTC Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.248496 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.248552 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.248573 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.248596 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.248614 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:14Z","lastTransitionTime":"2026-02-01T14:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.352126 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.352197 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.352215 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.352242 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.352260 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:14Z","lastTransitionTime":"2026-02-01T14:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.455579 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.455632 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.455650 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.455673 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.455692 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:14Z","lastTransitionTime":"2026-02-01T14:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.557936 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.557975 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.557985 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.557999 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.558008 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:14Z","lastTransitionTime":"2026-02-01T14:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.662674 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.662747 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.662774 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.662806 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.662828 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:14Z","lastTransitionTime":"2026-02-01T14:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.768316 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.768377 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.768401 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.768431 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.768451 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:14Z","lastTransitionTime":"2026-02-01T14:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.871094 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.871125 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.871134 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.871147 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.871156 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:14Z","lastTransitionTime":"2026-02-01T14:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.974081 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.974136 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.974156 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.974180 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:14 crc kubenswrapper[4820]: I0201 14:22:14.974197 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:14Z","lastTransitionTime":"2026-02-01T14:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.076308 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.076357 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.076376 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.076400 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.076418 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:15Z","lastTransitionTime":"2026-02-01T14:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.179065 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.179108 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.179116 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.179130 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.179140 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:15Z","lastTransitionTime":"2026-02-01T14:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.198094 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:22:15 crc kubenswrapper[4820]: E0201 14:22:15.198235 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.198407 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:22:15 crc kubenswrapper[4820]: E0201 14:22:15.198503 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.198609 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:22:15 crc kubenswrapper[4820]: E0201 14:22:15.198653 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.198826 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:22:15 crc kubenswrapper[4820]: E0201 14:22:15.198914 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.214770 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 10:14:22.418040301 +0000 UTC Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.282137 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.282174 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.282184 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.282203 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.282219 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:15Z","lastTransitionTime":"2026-02-01T14:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.386024 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.386075 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.386087 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.386105 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.386116 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:15Z","lastTransitionTime":"2026-02-01T14:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.489285 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.489330 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.489341 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.489359 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.489370 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:15Z","lastTransitionTime":"2026-02-01T14:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.591548 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.591596 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.591607 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.591626 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.591638 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:15Z","lastTransitionTime":"2026-02-01T14:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.693968 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.694001 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.694008 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.694021 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.694030 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:15Z","lastTransitionTime":"2026-02-01T14:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.797253 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.797327 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.797347 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.797373 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.797408 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:15Z","lastTransitionTime":"2026-02-01T14:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.883608 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.883683 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.883701 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.883949 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.883968 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:15Z","lastTransitionTime":"2026-02-01T14:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:15 crc kubenswrapper[4820]: E0201 14:22:15.900429 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:15Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.905612 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.905641 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.905652 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.905668 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.905678 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:15Z","lastTransitionTime":"2026-02-01T14:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:15 crc kubenswrapper[4820]: E0201 14:22:15.918598 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:15Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.921967 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.922024 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
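
The status-patch failures have a different root cause than the CNI wait: the node-identity webhook at 127.0.0.1:9743 is serving a certificate that expired on 2025-08-24, so every patch attempt is rejected. A minimal sketch for confirming that from the node; host and port come from the log entry above, while the third-party cryptography package (version 42 or newer for the *_utc accessors) is an assumption:

    # inspect_webhook_cert.py -- print the validity window of the certificate
    # served on 127.0.0.1:9743, to confirm the "certificate has expired" error.
    import datetime
    import socket
    import ssl

    from cryptography import x509

    HOST, PORT = "127.0.0.1", 9743

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False          # the webhook is addressed by IP in the log
    ctx.verify_mode = ssl.CERT_NONE     # inspect only; do not require a trusted chain

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER, available even unverified

    cert = x509.load_der_x509_certificate(der)
    now = datetime.datetime.now(datetime.timezone.utc)
    print("notBefore:", cert.not_valid_before_utc)
    print("notAfter: ", cert.not_valid_after_utc)
    print("expired:  ", now > cert.not_valid_after_utc)

With verification enabled, the caller's handshake fails outright, which is exactly the "tls: failed to verify certificate" error embedded in the patch responses recorded here.
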
event="NodeHasNoDiskPressure" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.922047 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.922076 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.922119 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:15Z","lastTransitionTime":"2026-02-01T14:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:15 crc kubenswrapper[4820]: E0201 14:22:15.940696 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:15Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.945104 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.945280 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.945400 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.945499 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.945584 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:15Z","lastTransitionTime":"2026-02-01T14:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:15 crc kubenswrapper[4820]: E0201 14:22:15.962315 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:15Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.965056 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.965110 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.965128 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.965152 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.965169 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:15Z","lastTransitionTime":"2026-02-01T14:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:15 crc kubenswrapper[4820]: E0201 14:22:15.980429 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:15Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:15 crc kubenswrapper[4820]: E0201 14:22:15.980922 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.982530 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.982576 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.982593 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.982615 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:15 crc kubenswrapper[4820]: I0201 14:22:15.982628 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:15Z","lastTransitionTime":"2026-02-01T14:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.085740 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.085776 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.085785 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.085799 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.085810 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:16Z","lastTransitionTime":"2026-02-01T14:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.188684 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.188714 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.188722 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.188736 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.188747 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:16Z","lastTransitionTime":"2026-02-01T14:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.215166 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 01:58:33.047976201 +0000 UTC Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.291442 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.291485 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.291497 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.291514 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.291525 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:16Z","lastTransitionTime":"2026-02-01T14:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.394279 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.394315 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.394327 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.394341 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.394350 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:16Z","lastTransitionTime":"2026-02-01T14:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.496852 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.496902 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.496914 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.496929 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.496937 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:16Z","lastTransitionTime":"2026-02-01T14:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.599178 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.599203 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.599211 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.599224 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.599233 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:16Z","lastTransitionTime":"2026-02-01T14:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.701457 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.701489 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.701497 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.701509 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.701518 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:16Z","lastTransitionTime":"2026-02-01T14:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.803636 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.803670 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.803678 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.803692 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.803701 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:16Z","lastTransitionTime":"2026-02-01T14:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.906005 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.906050 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.906067 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.906084 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:16 crc kubenswrapper[4820]: I0201 14:22:16.906099 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:16Z","lastTransitionTime":"2026-02-01T14:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.008758 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.008797 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.008806 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.008819 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.008829 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:17Z","lastTransitionTime":"2026-02-01T14:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.110611 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.110653 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.110664 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.110680 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.110691 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:17Z","lastTransitionTime":"2026-02-01T14:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.198648 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.198689 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.198645 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.198751 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:22:17 crc kubenswrapper[4820]: E0201 14:22:17.198950 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:22:17 crc kubenswrapper[4820]: E0201 14:22:17.199068 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:22:17 crc kubenswrapper[4820]: E0201 14:22:17.199112 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:22:17 crc kubenswrapper[4820]: E0201 14:22:17.199577 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.212103 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.212166 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.212183 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.212209 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.212226 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:17Z","lastTransitionTime":"2026-02-01T14:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.216224 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 02:49:56.471426998 +0000 UTC Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.314672 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.314713 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.314725 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.314740 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.314753 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:17Z","lastTransitionTime":"2026-02-01T14:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.418097 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.418153 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.418171 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.418194 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.418212 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:17Z","lastTransitionTime":"2026-02-01T14:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.520804 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.520837 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.520845 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.520858 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.520867 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:17Z","lastTransitionTime":"2026-02-01T14:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.623859 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.623906 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.623919 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.623934 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.623945 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:17Z","lastTransitionTime":"2026-02-01T14:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.726577 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.726614 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.726625 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.726640 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.726648 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:17Z","lastTransitionTime":"2026-02-01T14:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.828585 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.828645 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.828661 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.828684 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.828702 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:17Z","lastTransitionTime":"2026-02-01T14:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.930977 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.931031 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.931047 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.931066 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:17 crc kubenswrapper[4820]: I0201 14:22:17.931078 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:17Z","lastTransitionTime":"2026-02-01T14:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.034028 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.034075 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.034094 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.034120 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.034136 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:18Z","lastTransitionTime":"2026-02-01T14:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.138262 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.138317 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.138331 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.138348 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.138360 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:18Z","lastTransitionTime":"2026-02-01T14:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.217313 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 06:43:21.079495716 +0000 UTC Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.241109 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.241168 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.241195 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.241223 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.241245 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:18Z","lastTransitionTime":"2026-02-01T14:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.344138 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.344206 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.344223 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.344246 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.344264 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:18Z","lastTransitionTime":"2026-02-01T14:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.446564 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.446593 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.446602 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.446618 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.446628 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:18Z","lastTransitionTime":"2026-02-01T14:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.549590 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.549657 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.549676 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.549701 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.549721 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:18Z","lastTransitionTime":"2026-02-01T14:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.653012 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.653083 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.653104 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.653127 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.653146 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:18Z","lastTransitionTime":"2026-02-01T14:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.755496 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.755535 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.755545 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.755560 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.755570 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:18Z","lastTransitionTime":"2026-02-01T14:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.857664 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.857698 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.857709 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.857723 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.857734 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:18Z","lastTransitionTime":"2026-02-01T14:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.959509 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.959541 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.959549 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.959561 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:18 crc kubenswrapper[4820]: I0201 14:22:18.959569 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:18Z","lastTransitionTime":"2026-02-01T14:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.061744 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.061798 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.061809 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.061823 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.061833 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:19Z","lastTransitionTime":"2026-02-01T14:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.164495 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.164561 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.164574 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.164593 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.164607 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:19Z","lastTransitionTime":"2026-02-01T14:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.198247 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:22:19 crc kubenswrapper[4820]: E0201 14:22:19.198433 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.198475 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.198475 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.198588 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:22:19 crc kubenswrapper[4820]: E0201 14:22:19.198699 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.199438 4820 scope.go:117] "RemoveContainer" containerID="08b9f9a05af274542ac8504b8f3fe5b1dc25fa5c2de853c417a27af4676169d7" Feb 01 14:22:19 crc kubenswrapper[4820]: E0201 14:22:19.199787 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:22:19 crc kubenswrapper[4820]: E0201 14:22:19.200012 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.213931 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.217488 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 03:09:02.80742502 +0000 UTC Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.227155 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc3582
5771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.241934 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.252679 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0756a1ed611ee30513307450b56a0a19ffdd6d4671c7ae1b9e59ead1bddd7564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c39b2e34508f9b18b5d851feabce4a6117eeb3e95172f538ec23b0d2dafd1294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptp6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 
14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.263972 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.267812 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.267847 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.267860 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.267901 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.267919 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:19Z","lastTransitionTime":"2026-02-01T14:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.277711 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.287078 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.298843 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0da5ee5ab8907d5144b25f7740d060ca28d4b2be0d366187088c6562cd92eb9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:22:09Z\\\",\\\"message\\\":\\\"2026-02-01T14:21:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_42fa77c1-ef3c-4824-9163-b3a470b22750\\\\n2026-02-01T14:21:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_42fa77c1-ef3c-4824-9163-b3a470b22750 to /host/opt/cni/bin/\\\\n2026-02-01T14:21:24Z [verbose] multus-daemon started\\\\n2026-02-01T14:21:24Z [verbose] Readiness Indicator file check\\\\n2026-02-01T14:22:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.315621 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.330743 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.344352 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b609742b-d084-426e-91b2-295a55029b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e02daf42c16519959234d13012c85757024b28979b4bbc46ebef19fc7c57c6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa8913f9bae7df497b0a4124aa907e230e99e11f2b46b0b765adfba3cc7d9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8810f9d798db466017c050c1aee60e6c4dfb31028044e738b334806ba6c538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64fbff194848e43b310b53537ffe0a2a2046761d8ac283714252f3117aa03a4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64fbff194848e43b310b53537ffe0a2a2046761d8ac283714252f3117aa03a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.358347 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.370976 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.371888 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.372021 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.372134 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.372223 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.372297 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:19Z","lastTransitionTime":"2026-02-01T14:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.380284 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.393027 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b92ec7eb262908b4723fb55382b9658604ded30de9c7a9ca163cb8f2140f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.402407 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66de5dd2-63ee-437a-bfd0-85d8e2015d71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f676e6b13a32237c36128907682976a34657cec50f75a143a51a82ee71b1dcd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2b45242751c6814ff86454a22b2269910501dc8e99910a0d42a43333206e408\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2b45242751c6814ff86454a22b2269910501dc8e99910a0d42a43333206e408\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.412793 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dj7sg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8befd56b-2ebe-48c7-9027-4f906b2e09d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dj7sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.430685 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b9f9a05af274542ac8504b8f3fe5b1dc25fa5c2de853c417a27af4676169d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b9f9a05af274542ac8504b8f3fe5b1dc25fa5c2de853c417a27af4676169d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:21:53Z\\\",\\\"message\\\":\\\"me:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 14:21:53.304958 6528 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0201 14:21:53.305211 6528 handler.go:208] Removed *v1.Node event handler 2\\\\nI0201 14:21:53.305577 6528 factory.go:656] Stopping watch factory\\\\nI0201 14:21:53.305619 6528 ovnkube.go:599] Stopped ovnkube\\\\nI0201 14:21:53.305660 6528 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0201 14:21:53.305727 6528 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}\\\\nI0201 14:21:53.305761 6528 services_controller.go:360] Finished syncing service metrics on namespace openshift-ingress-operator for network=default : 33.822509ms\\\\nF0201 14:21:53.305776 6528 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m4skx_openshift-ovn-kubernetes(2c428279-629a-4fd5-9955-1598ed4f6f84)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.474267 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.474295 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.474308 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.474324 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.474335 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:19Z","lastTransitionTime":"2026-02-01T14:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.576788 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.576814 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.576822 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.576834 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.576843 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:19Z","lastTransitionTime":"2026-02-01T14:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.617832 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4skx_2c428279-629a-4fd5-9955-1598ed4f6f84/ovnkube-controller/2.log" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.620334 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerStarted","Data":"8f79dbe742b26404c994277986f796d382ed0b76cec41b8e23c39cfa39c98331"} Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.620789 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.633775 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0da5ee5ab8907d5144b25f7740d060ca28d4b2be0d366187088c6562cd92eb9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:22:09Z\\\",\\\"message\\\":\\\"2026-02-01T14:21:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_42fa77c1-ef3c-4824-9163-b3a470b22750\\\\n2026-02-01T14:21:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_42fa77c1-ef3c-4824-9163-b3a470b22750 to /host/opt/cni/bin/\\\\n2026-02-01T14:21:24Z [verbose] multus-daemon started\\\\n2026-02-01T14:21:24Z [verbose] Readiness Indicator file check\\\\n2026-02-01T14:22:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.646139 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.655255 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.663903 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.674471 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b609742b-d084-426e-91b2-295a55029b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e02daf42c16519959234d13012c85757024b28979b4bbc46ebef19fc7c57c6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa8913f9bae7df497b0a4124aa907e230e99e11f2b46b0b765adfba3cc7d9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8810f9d798db466017c050c1aee60e6c4dfb31028044e738b334806ba6c538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://c64fbff194848e43b310b53537ffe0a2a2046761d8ac283714252f3117aa03a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64fbff194848e43b310b53537ffe0a2a2046761d8ac283714252f3117aa03a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.678855 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.678969 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.679034 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.679062 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.679121 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:19Z","lastTransitionTime":"2026-02-01T14:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.690420 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.704395 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.713960 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.728945 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b92ec7eb262908b4723fb55382b9658604ded30de9c7a9ca163cb8f2140f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.740814 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66de5dd2-63ee-437a-bfd0-85d8e2015d71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f676e6b13a32237c36128907682976a34657cec50f75a143a51a82ee71b1dcd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2b45242751c6814ff86454a22b2269910501dc8e99910a0d42a43333206e408\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2b45242751c6814ff86454a22b2269910501dc8e99910a0d42a43333206e408\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.754514 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.769773 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f79dbe742b26404c994277986f796d382ed0b76cec41b8e23c39cfa39c98331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b9f9a05af274542ac8504b8f3fe5b1dc25fa5c2de853c417a27af4676169d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:21:53Z\\\",\\\"message\\\":\\\"me:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 14:21:53.304958 6528 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0201 14:21:53.305211 6528 handler.go:208] Removed *v1.Node event handler 2\\\\nI0201 14:21:53.305577 6528 factory.go:656] Stopping watch factory\\\\nI0201 14:21:53.305619 6528 ovnkube.go:599] Stopped ovnkube\\\\nI0201 14:21:53.305660 6528 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0201 14:21:53.305727 6528 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}\\\\nI0201 14:21:53.305761 6528 services_controller.go:360] Finished syncing service metrics on namespace openshift-ingress-operator for network=default : 33.822509ms\\\\nF0201 14:21:53.305776 6528 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.779140 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dj7sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8befd56b-2ebe-48c7-9027-4f906b2e09d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dj7sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.781549 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.781596 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.781610 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.781626 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.781636 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:19Z","lastTransitionTime":"2026-02-01T14:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.791643 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.802059 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.815232 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.826588 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.837706 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0756a1ed611ee30513307450b56a0a19ffdd6d4671c7ae1b9e59ead1bddd7564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c39b2e34508f9b18b5d851feabce4a6117eeb3e95172f538ec23b0d2dafd1294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:32Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptp6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:19Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.884531 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.884570 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.884583 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.884597 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.884609 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:19Z","lastTransitionTime":"2026-02-01T14:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.987002 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.987046 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.987054 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.987068 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:19 crc kubenswrapper[4820]: I0201 14:22:19.987076 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:19Z","lastTransitionTime":"2026-02-01T14:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.089072 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.089113 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.089123 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.089139 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.089150 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:20Z","lastTransitionTime":"2026-02-01T14:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.191251 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.191284 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.191292 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.191304 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.191314 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:20Z","lastTransitionTime":"2026-02-01T14:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.218069 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 08:13:01.024089149 +0000 UTC Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.293404 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.293438 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.293447 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.293459 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.293468 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:20Z","lastTransitionTime":"2026-02-01T14:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.395893 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.395934 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.395945 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.395960 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.395972 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:20Z","lastTransitionTime":"2026-02-01T14:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.498382 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.498420 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.498432 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.498449 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.498461 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:20Z","lastTransitionTime":"2026-02-01T14:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.600197 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.600244 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.600253 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.600266 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.600275 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:20Z","lastTransitionTime":"2026-02-01T14:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.702224 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.702270 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.702280 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.702295 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.702303 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:20Z","lastTransitionTime":"2026-02-01T14:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.804543 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.804591 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.804605 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.804622 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.804633 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:20Z","lastTransitionTime":"2026-02-01T14:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.906621 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.906670 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.906686 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.906708 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:20 crc kubenswrapper[4820]: I0201 14:22:20.906723 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:20Z","lastTransitionTime":"2026-02-01T14:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.009219 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.009288 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.009300 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.009317 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.009330 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:21Z","lastTransitionTime":"2026-02-01T14:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.111550 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.111620 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.111631 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.111648 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.111660 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:21Z","lastTransitionTime":"2026-02-01T14:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.198214 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.198255 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.198280 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:22:21 crc kubenswrapper[4820]: E0201 14:22:21.198334 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.198346 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:22:21 crc kubenswrapper[4820]: E0201 14:22:21.198437 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:22:21 crc kubenswrapper[4820]: E0201 14:22:21.198472 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:22:21 crc kubenswrapper[4820]: E0201 14:22:21.198524 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.213737 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.213780 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.213792 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.213812 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.213823 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:21Z","lastTransitionTime":"2026-02-01T14:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.219096 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 10:05:06.357945915 +0000 UTC Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.317647 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.317687 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.317696 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.317710 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.317720 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:21Z","lastTransitionTime":"2026-02-01T14:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.420380 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.420439 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.420451 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.420472 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.420485 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:21Z","lastTransitionTime":"2026-02-01T14:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.522691 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.522790 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.522814 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.522831 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.522847 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:21Z","lastTransitionTime":"2026-02-01T14:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.625031 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.625058 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.625066 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.625078 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.625087 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:21Z","lastTransitionTime":"2026-02-01T14:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.626667 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4skx_2c428279-629a-4fd5-9955-1598ed4f6f84/ovnkube-controller/3.log" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.627293 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4skx_2c428279-629a-4fd5-9955-1598ed4f6f84/ovnkube-controller/2.log" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.630211 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerID="8f79dbe742b26404c994277986f796d382ed0b76cec41b8e23c39cfa39c98331" exitCode=1 Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.630284 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerDied","Data":"8f79dbe742b26404c994277986f796d382ed0b76cec41b8e23c39cfa39c98331"} Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.630339 4820 scope.go:117] "RemoveContainer" containerID="08b9f9a05af274542ac8504b8f3fe5b1dc25fa5c2de853c417a27af4676169d7" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.631462 4820 scope.go:117] "RemoveContainer" containerID="8f79dbe742b26404c994277986f796d382ed0b76cec41b8e23c39cfa39c98331" Feb 01 14:22:21 crc kubenswrapper[4820]: E0201 14:22:21.631725 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-m4skx_openshift-ovn-kubernetes(2c428279-629a-4fd5-9955-1598ed4f6f84)\"" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.649333 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.659630 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.672485 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0756a1ed611ee30513307450b56a0a19ffdd6d4671c7ae1b9e59ead1bddd7564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c39b2e34508f9b18b5d851feabce4a6117eeb3e95172f538ec23b0d2dafd1294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:32Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptp6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.684962 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.695641 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.706277 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.717861 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0da5ee5ab8907d5144b25f7740d060ca28d4b2be0d366187088c6562cd92eb9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:22:09Z\\\",\\\"message\\\":\\\"2026-02-01T14:21:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_42fa77c1-ef3c-4824-9163-b3a470b22750\\\\n2026-02-01T14:21:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_42fa77c1-ef3c-4824-9163-b3a470b22750 to /host/opt/cni/bin/\\\\n2026-02-01T14:21:24Z [verbose] multus-daemon started\\\\n2026-02-01T14:21:24Z [verbose] Readiness Indicator file check\\\\n2026-02-01T14:22:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.727456 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.727500 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.727513 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.727531 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.727545 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:21Z","lastTransitionTime":"2026-02-01T14:22:21Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.730579 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.744745 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.754226 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.773070 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b92ec7eb262908b4723fb55382b9658604ded30de9c7a9ca163cb8f2140f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.784054 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66de5dd2-63ee-437a-bfd0-85d8e2015d71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f676e6b13a32237c36128907682976a34657cec50f75a143a51a82ee71b1dcd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2b45242751c6814ff86454a22b2269910501dc8e99910a0d42a43333206e408\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2b45242751c6814ff86454a22b2269910501dc8e99910a0d42a43333206e408\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.796864 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.807277 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b609742b-d084-426e-91b2-295a55029b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e02daf42c16519959234d13012c85757024b28979b4bbc46ebef19fc7c57c6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa8913f9bae7df497b0a4124aa907e230e99e11f2b46b0b765adfba3cc7d9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8810f9d798db466017c050c1aee60e6c4dfb31028044e738b334806ba6c538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64fbff194848e43b310b53537ffe0a2a2046761d8ac283714252f3117aa03a4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64fbff194848e43b310b53537ffe0a2a2046761d8ac283714252f3117aa03a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.828943 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f79dbe742b26404c994277986f796d382ed0b76
cec41b8e23c39cfa39c98331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b9f9a05af274542ac8504b8f3fe5b1dc25fa5c2de853c417a27af4676169d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:21:53Z\\\",\\\"message\\\":\\\"me:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 14:21:53.304958 6528 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0201 14:21:53.305211 6528 handler.go:208] Removed *v1.Node event handler 2\\\\nI0201 14:21:53.305577 6528 factory.go:656] Stopping watch factory\\\\nI0201 14:21:53.305619 6528 ovnkube.go:599] Stopped ovnkube\\\\nI0201 14:21:53.305660 6528 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0201 14:21:53.305727 6528 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}\\\\nI0201 14:21:53.305761 6528 services_controller.go:360] Finished syncing service metrics on namespace openshift-ingress-operator for network=default : 33.822509ms\\\\nF0201 14:21:53.305776 6528 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f79dbe742b26404c994277986f796d382ed0b76cec41b8e23c39cfa39c98331\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:22:20Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 14:22:20.545621 6926 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0201 14:22:20.545644 6926 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0201 14:22:20.545664 6926 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0201 14:22:20.545723 6926 factory.go:1336] Added *v1.Node event handler 7\\\\nI0201 14:22:20.545750 6926 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0201 14:22:20.546031 6926 factory.go:1336] Added 
*v1.EgressFirewall event handler 9\\\\nI0201 14:22:20.546100 6926 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0201 14:22:20.546126 6926 ovnkube.go:599] Stopped ovnkube\\\\nI0201 14:22:20.546146 6926 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0201 14:22:20.546204 6926 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.830123 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.830153 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.830163 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.830176 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.830185 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:21Z","lastTransitionTime":"2026-02-01T14:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.840480 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dj7sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8befd56b-2ebe-48c7-9027-4f906b2e09d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dj7sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.853571 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.867611 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:21Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.932779 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.932936 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.932951 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.932967 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:21 crc kubenswrapper[4820]: I0201 14:22:21.932982 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:21Z","lastTransitionTime":"2026-02-01T14:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.035407 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.035473 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.035495 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.035525 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.035557 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:22Z","lastTransitionTime":"2026-02-01T14:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.138841 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.138918 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.138937 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.138959 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.138976 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:22Z","lastTransitionTime":"2026-02-01T14:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.219202 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:41:21.187735764 +0000 UTC Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.241652 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.241720 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.241739 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.241763 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.241779 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:22Z","lastTransitionTime":"2026-02-01T14:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.344920 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.344993 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.345012 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.345042 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.345062 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:22Z","lastTransitionTime":"2026-02-01T14:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.448212 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.448259 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.448270 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.448288 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.448301 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:22Z","lastTransitionTime":"2026-02-01T14:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.550350 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.550405 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.550418 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.550436 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.550449 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:22Z","lastTransitionTime":"2026-02-01T14:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.653245 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.653325 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.653339 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.653358 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.653378 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:22Z","lastTransitionTime":"2026-02-01T14:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.755924 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.756282 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.756298 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.756317 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.756329 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:22Z","lastTransitionTime":"2026-02-01T14:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.858703 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.858749 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.858762 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.858782 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.858794 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:22Z","lastTransitionTime":"2026-02-01T14:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.922405 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:22:22 crc kubenswrapper[4820]: E0201 14:22:22.922616 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:26.922580447 +0000 UTC m=+148.442946741 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.961417 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.961507 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.961526 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.961548 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:22 crc kubenswrapper[4820]: I0201 14:22:22.961562 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:22Z","lastTransitionTime":"2026-02-01T14:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.023384 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.023433 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.023463 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.023507 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:22:23 crc kubenswrapper[4820]: E0201 14:22:23.023547 4820 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 14:22:23 crc kubenswrapper[4820]: E0201 14:22:23.023579 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 14:22:23 crc kubenswrapper[4820]: E0201 14:22:23.023598 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 14:22:23 crc kubenswrapper[4820]: E0201 14:22:23.023608 4820 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:22:23 crc kubenswrapper[4820]: E0201 14:22:23.023625 4820 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 14:22:23 crc kubenswrapper[4820]: E0201 14:22:23.023657 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 14:22:23 crc kubenswrapper[4820]: E0201 14:22:23.023693 4820 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 14:22:23 crc kubenswrapper[4820]: E0201 14:22:23.023633 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 14:23:27.023617429 +0000 UTC m=+148.543983713 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 14:22:23 crc kubenswrapper[4820]: E0201 14:22:23.023704 4820 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:22:23 crc kubenswrapper[4820]: E0201 14:22:23.023734 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-01 14:23:27.023711121 +0000 UTC m=+148.544077485 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:22:23 crc kubenswrapper[4820]: E0201 14:22:23.023757 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 14:23:27.023746092 +0000 UTC m=+148.544112486 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 14:22:23 crc kubenswrapper[4820]: E0201 14:22:23.023783 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-01 14:23:27.023769922 +0000 UTC m=+148.544136346 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.063937 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.063972 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.063981 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.063996 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.064005 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:23Z","lastTransitionTime":"2026-02-01T14:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.166365 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.166396 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.166405 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.166420 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.166432 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:23Z","lastTransitionTime":"2026-02-01T14:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.197965 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.198065 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:22:23 crc kubenswrapper[4820]: E0201 14:22:23.198085 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.197976 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:22:23 crc kubenswrapper[4820]: E0201 14:22:23.198202 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.197965 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:22:23 crc kubenswrapper[4820]: E0201 14:22:23.198318 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:22:23 crc kubenswrapper[4820]: E0201 14:22:23.198371 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.220033 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 08:48:54.243134472 +0000 UTC Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.268951 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.268990 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.269196 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.269213 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.269223 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:23Z","lastTransitionTime":"2026-02-01T14:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.371964 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.372002 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.372011 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.372025 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.372034 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:23Z","lastTransitionTime":"2026-02-01T14:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.474169 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.474205 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.474214 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.474227 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.474237 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:23Z","lastTransitionTime":"2026-02-01T14:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.576359 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.576403 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.576415 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.576429 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.576438 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:23Z","lastTransitionTime":"2026-02-01T14:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.637387 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4skx_2c428279-629a-4fd5-9955-1598ed4f6f84/ovnkube-controller/3.log" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.678280 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.678324 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.678337 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.678355 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.678366 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:23Z","lastTransitionTime":"2026-02-01T14:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.780461 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.780511 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.780523 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.780542 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.780555 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:23Z","lastTransitionTime":"2026-02-01T14:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.883329 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.883371 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.883384 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.883399 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.883411 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:23Z","lastTransitionTime":"2026-02-01T14:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.986158 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.986245 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.986270 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.986299 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:23 crc kubenswrapper[4820]: I0201 14:22:23.986328 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:23Z","lastTransitionTime":"2026-02-01T14:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.089081 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.089147 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.089167 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.089189 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.089205 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:24Z","lastTransitionTime":"2026-02-01T14:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.192425 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.192617 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.192634 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.192661 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.192726 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:24Z","lastTransitionTime":"2026-02-01T14:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.221144 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 13:39:02.198584069 +0000 UTC Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.295818 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.295909 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.295930 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.296140 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.296154 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:24Z","lastTransitionTime":"2026-02-01T14:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.399770 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.399816 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.399839 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.399859 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.399901 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:24Z","lastTransitionTime":"2026-02-01T14:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.501826 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.501908 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.501921 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.501939 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.501951 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:24Z","lastTransitionTime":"2026-02-01T14:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.605693 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.605726 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.605735 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.605747 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.605756 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:24Z","lastTransitionTime":"2026-02-01T14:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.708629 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.708692 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.708714 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.708732 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.708747 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:24Z","lastTransitionTime":"2026-02-01T14:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.811754 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.811821 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.811840 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.811864 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.811909 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:24Z","lastTransitionTime":"2026-02-01T14:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.915109 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.915188 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.915213 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.915244 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:24 crc kubenswrapper[4820]: I0201 14:22:24.915271 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:24Z","lastTransitionTime":"2026-02-01T14:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.018194 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.018270 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.018294 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.018323 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.018346 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:25Z","lastTransitionTime":"2026-02-01T14:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.122079 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.122117 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.122126 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.122140 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.122149 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:25Z","lastTransitionTime":"2026-02-01T14:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.197867 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.197960 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.197931 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:22:25 crc kubenswrapper[4820]: E0201 14:22:25.198124 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.198152 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:22:25 crc kubenswrapper[4820]: E0201 14:22:25.198263 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:22:25 crc kubenswrapper[4820]: E0201 14:22:25.198462 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:22:25 crc kubenswrapper[4820]: E0201 14:22:25.198532 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.222055 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 05:39:21.062669928 +0000 UTC Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.225344 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.225410 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.225428 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.225451 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.225469 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:25Z","lastTransitionTime":"2026-02-01T14:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.327756 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.327798 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.327811 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.327828 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.327840 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:25Z","lastTransitionTime":"2026-02-01T14:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.431700 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.431782 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.431798 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.431817 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.431835 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:25Z","lastTransitionTime":"2026-02-01T14:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.536527 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.536609 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.536624 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.536642 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.536652 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:25Z","lastTransitionTime":"2026-02-01T14:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.638925 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.638984 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.639001 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.639023 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.639041 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:25Z","lastTransitionTime":"2026-02-01T14:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.741523 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.741602 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.741616 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.741632 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.741645 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:25Z","lastTransitionTime":"2026-02-01T14:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.843938 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.843965 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.843973 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.843985 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.843994 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:25Z","lastTransitionTime":"2026-02-01T14:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.946309 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.946340 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.946350 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.946365 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:25 crc kubenswrapper[4820]: I0201 14:22:25.946374 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:25Z","lastTransitionTime":"2026-02-01T14:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.048480 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.048545 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.048563 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.048587 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.048606 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:26Z","lastTransitionTime":"2026-02-01T14:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.151706 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.151752 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.151764 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.151779 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.151791 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:26Z","lastTransitionTime":"2026-02-01T14:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.204845 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.204959 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.204985 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.205014 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.205036 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:26Z","lastTransitionTime":"2026-02-01T14:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.222152 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 13:01:39.552121341 +0000 UTC Feb 01 14:22:26 crc kubenswrapper[4820]: E0201 14:22:26.222159 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:26Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.227269 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.227303 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.227315 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.227334 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.227347 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:26Z","lastTransitionTime":"2026-02-01T14:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:26 crc kubenswrapper[4820]: E0201 14:22:26.246240 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:26Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.250496 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.250535 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.250544 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.250560 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.250570 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:26Z","lastTransitionTime":"2026-02-01T14:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:26 crc kubenswrapper[4820]: E0201 14:22:26.269265 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:26Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.272953 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.273004 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.273021 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.273045 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.273065 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:26Z","lastTransitionTime":"2026-02-01T14:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:26 crc kubenswrapper[4820]: E0201 14:22:26.287338 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:26Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.291572 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.291607 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.291620 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.291636 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.291649 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:26Z","lastTransitionTime":"2026-02-01T14:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:26 crc kubenswrapper[4820]: E0201 14:22:26.306039 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b8e7bd4f-d2dd-4ff5-b41a-2605330b4088\\\",\\\"systemUUID\\\":\\\"e802f971-4889-4bb7-b640-2de29e2c4a97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:26Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:26 crc kubenswrapper[4820]: E0201 14:22:26.306154 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.307709 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.307736 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.307744 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.307757 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.307766 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:26Z","lastTransitionTime":"2026-02-01T14:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.410024 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.410058 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.410068 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.410080 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.410088 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:26Z","lastTransitionTime":"2026-02-01T14:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.512594 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.512635 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.512643 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.512662 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.512673 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:26Z","lastTransitionTime":"2026-02-01T14:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.615459 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.615520 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.615531 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.615546 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.615557 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:26Z","lastTransitionTime":"2026-02-01T14:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.717575 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.717614 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.717624 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.717639 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.717649 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:26Z","lastTransitionTime":"2026-02-01T14:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.819510 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.819546 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.819555 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.819569 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.819579 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:26Z","lastTransitionTime":"2026-02-01T14:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.922561 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.922597 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.922605 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.922622 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:26 crc kubenswrapper[4820]: I0201 14:22:26.922631 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:26Z","lastTransitionTime":"2026-02-01T14:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.025801 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.025866 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.025921 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.025953 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.025977 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:27Z","lastTransitionTime":"2026-02-01T14:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.128999 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.129076 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.129097 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.129127 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.129152 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:27Z","lastTransitionTime":"2026-02-01T14:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.198429 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.198510 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.198522 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.198642 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:22:27 crc kubenswrapper[4820]: E0201 14:22:27.198634 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:22:27 crc kubenswrapper[4820]: E0201 14:22:27.198821 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:22:27 crc kubenswrapper[4820]: E0201 14:22:27.198946 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:22:27 crc kubenswrapper[4820]: E0201 14:22:27.199029 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.216084 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.222644 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 14:19:48.873370347 +0000 UTC Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.231562 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.231613 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.231625 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.231641 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.231653 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:27Z","lastTransitionTime":"2026-02-01T14:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.334180 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.334224 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.334235 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.334250 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.334259 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:27Z","lastTransitionTime":"2026-02-01T14:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.436962 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.436998 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.437006 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.437021 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.437030 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:27Z","lastTransitionTime":"2026-02-01T14:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.539885 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.539925 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.539935 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.539949 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.539966 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:27Z","lastTransitionTime":"2026-02-01T14:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.642259 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.642291 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.642460 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.642476 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.642485 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:27Z","lastTransitionTime":"2026-02-01T14:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.744808 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.744838 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.744846 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.744859 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.744867 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:27Z","lastTransitionTime":"2026-02-01T14:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.847909 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.847944 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.847952 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.847966 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.847974 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:27Z","lastTransitionTime":"2026-02-01T14:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.951041 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.951087 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.951096 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.951111 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:27 crc kubenswrapper[4820]: I0201 14:22:27.951120 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:27Z","lastTransitionTime":"2026-02-01T14:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.053261 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.053292 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.053303 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.053314 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.053323 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:28Z","lastTransitionTime":"2026-02-01T14:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.155837 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.155910 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.155922 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.155977 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.155990 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:28Z","lastTransitionTime":"2026-02-01T14:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.222825 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 15:52:25.560696975 +0000 UTC Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.258763 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.258832 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.258855 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.258938 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.258967 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:28Z","lastTransitionTime":"2026-02-01T14:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.361912 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.361938 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.361945 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.361958 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.361966 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:28Z","lastTransitionTime":"2026-02-01T14:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.464632 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.464852 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.464868 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.465063 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.465077 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:28Z","lastTransitionTime":"2026-02-01T14:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.567947 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.567991 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.568002 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.568023 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.568035 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:28Z","lastTransitionTime":"2026-02-01T14:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.669662 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.669697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.669705 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.669718 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.669727 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:28Z","lastTransitionTime":"2026-02-01T14:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.772530 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.772564 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.772572 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.772585 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.772596 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:28Z","lastTransitionTime":"2026-02-01T14:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.874524 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.874597 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.874609 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.874625 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.874636 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:28Z","lastTransitionTime":"2026-02-01T14:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.977272 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.977304 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.977312 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.977325 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:28 crc kubenswrapper[4820]: I0201 14:22:28.977333 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:28Z","lastTransitionTime":"2026-02-01T14:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.079355 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.079405 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.079417 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.079431 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.079442 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:29Z","lastTransitionTime":"2026-02-01T14:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.181712 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.181745 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.181754 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.181767 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.181775 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:29Z","lastTransitionTime":"2026-02-01T14:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.198719 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.198748 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:22:29 crc kubenswrapper[4820]: E0201 14:22:29.198836 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.198855 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.198890 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:22:29 crc kubenswrapper[4820]: E0201 14:22:29.198941 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:22:29 crc kubenswrapper[4820]: E0201 14:22:29.199032 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:22:29 crc kubenswrapper[4820]: E0201 14:22:29.199261 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.210908 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3c859a3cbf69827e459523a73b5a00e11b09e070072a640e111040ed959bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.220320 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6981b240b9c7c3352d00d5d708ecd189215ec818db1baf8a41b2cb3dc89eece7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.222994 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:16:38.279971551 +0000 UTC Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.229702 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"060a9e0b-803f-4ccc-bed6-92614d449527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5261cab8e45eff217fce1ae85a783a8e67f362ec2e95c25c4b748d20f9dc8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qlddw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w8vbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.242422 4820 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-q922s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f8fae3-1755-461a-8748-a0033423ad5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0da5ee5ab8907d5144b25f7740d060ca28d4b2be0d366187088c6562cd92eb9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:22:09Z\\\",\\\"message\\\":\\\"2026-02-01T14:21:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_42fa77c1-ef3c-4824-9163-b3a470b22750\\\\n2026-02-01T14:21:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_42fa77c1-ef3c-4824-9163-b3a470b22750 to /host/opt/cni/bin/\\\\n2026-02-01T14:21:24Z [verbose] multus-daemon started\\\\n2026-02-01T14:21:24Z [verbose] Readiness Indicator file check\\\\n2026-02-01T14:22:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q922s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.256373 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e228d5b6-4ae4-4c56-b52d-d895d1e4ab67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b92ec7eb262908b4723fb55382b9658604ded30de9c7a9ca163cb8f2140f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d70b5d9e70f519681d58bdcbed805704ef1619c6f21f8cb62fe5c0dba42c30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cccc86310b58ca55b3de5d779a3e32c9725af4876eef7f16a14ac309d4dda5b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da8ede336e19b60611463ac19ca2bba275a7a26b4278b42ca41dbb08bea733f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0b108348829ebd1d6ab7b7426174015485c5ebccec3432e468be55d484cd18c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684c677a1571891e16f9433a9bd4db017ef958fd9827414fb3a34eec41df0c46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aab7ab0c63da506018ecad77bb37b05547b6ce5e228949c3db052257b1faac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbvl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbtsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.267030 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66de5dd2-63ee-437a-bfd0-85d8e2015d71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f676e6b13a32237c36128907682976a34657cec50f75a143a51a82ee71b1dcd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2b45242751c6814ff86454a22b2269910501dc8e99910a0d42a43333206e408\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2b45242751c6814ff86454a22b2269910501dc8e99910a0d42a43333206e408\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.283947 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.283982 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.283993 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.284009 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.284020 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:29Z","lastTransitionTime":"2026-02-01T14:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.284400 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afe4f85-bc91-45bb-8118-69b642329599\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2cce4d029599b5c9a47abf8f2db0af9adf904c6fd69bd899b6c324603dae407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1de1b60eca623b40c4adf3f2577dc4f5e0b046899016d557b7601eff47c2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f771cd0c8c06eddf1c69c24299aabf7705bca5971e50e6d710ec70c67cc9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cccd82e3c896432eadd8aa4e3682175b4718bcc25833ebcfc5290f6d253b6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131f417c4046669c093d889bf48b10b8708fa0e3db7ba8624168ade0273a1098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8eb52f4a00a01d1201a3233d74bf0007a46979f5c5a5c7177d368fe5f77eb37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8eb52f4a00a01d1201a3233d74bf0007a46979f5c5a5c7177d368fe5f77eb37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3b7c032364962c991180de3f2f83db0f5a4b7735ba6f1012c01d451774ad0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c3b7c032364962c991180de3f2f83db0f5a4b7735ba6f1012c01d451774ad0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7e0cab6cc8e0a21837eadc200f79a27f0d5a832a9476d96babf08069faa2b7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0cab6cc8e0a21837eadc200f79a27f0d5a832a9476d96babf08069faa2b7f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.296153 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.307838 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b609742b-d084-426e-91b2-295a55029b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e02daf42c16519959234d13012c85757024b28979b4bbc46ebef19fc7c57c6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa8913f9bae7df497b0a4124aa907e230e99e11f2b46b0b765adfba3cc7d9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8810f9d798db466017c050c1aee60e6c4dfb31028044e738b334806ba6c538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64fbff194848e43b310b53537ffe0a2a2046761d8ac283714252f3117aa03a4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64fbff194848e43b310b53537ffe0a2a2046761d8ac283714252f3117aa03a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.323818 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.340849 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5054d3fc6b9cf72702837d05c1b24676eb30cb25bbf0e66d2ba8ca1e0f62f2b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd19190708c53d5af347007861e75a66b8e84f4ca10d0b7c295cb35ba458781b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.353835 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r52d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099a4607-5e9e-42d7-926d-56372fd5f23a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b337a08ec2ef87515a37269a43ce9aba5e0bc6265d4062a57d131a911632c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8wdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r52d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.377945 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c428279-629a-4fd5-9955-1598ed4f6f84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f79dbe742b26404c994277986f796d382ed0b76cec41b8e23c39cfa39c98331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b9f9a05af274542ac8504b8f3fe5b1dc25fa5c2de853c417a27af4676169d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:21:53Z\\\",\\\"message\\\":\\\"me:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 14:21:53.304958 6528 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0201 14:21:53.305211 6528 handler.go:208] Removed *v1.Node event handler 2\\\\nI0201 14:21:53.305577 6528 factory.go:656] Stopping watch factory\\\\nI0201 14:21:53.305619 6528 ovnkube.go:599] Stopped ovnkube\\\\nI0201 14:21:53.305660 6528 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0201 14:21:53.305727 6528 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}\\\\nI0201 14:21:53.305761 6528 services_controller.go:360] Finished syncing service metrics on namespace openshift-ingress-operator for network=default : 33.822509ms\\\\nF0201 14:21:53.305776 6528 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f79dbe742b26404c994277986f796d382ed0b76cec41b8e23c39cfa39c98331\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T14:22:20Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert 
Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 14:22:20.545621 6926 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0201 14:22:20.545644 6926 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0201 14:22:20.545664 6926 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0201 14:22:20.545723 6926 factory.go:1336] Added *v1.Node event handler 7\\\\nI0201 14:22:20.545750 6926 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0201 14:22:20.546031 6926 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0201 14:22:20.546100 6926 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0201 14:22:20.546126 6926 ovnkube.go:599] Stopped ovnkube\\\\nI0201 14:22:20.546146 6926 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0201 14:22:20.546204 6926 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c23aee92cafe750112261300a774
a99e38a57de0bd3aea2e318970b5d3a88037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lp6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m4skx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.386366 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.386553 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.386635 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.386697 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:29 crc kubenswrapper[4820]: 
I0201 14:22:29.386762 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:29Z","lastTransitionTime":"2026-02-01T14:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.389640 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dj7sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8befd56b-2ebe-48c7-9027-4f906b2e09d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smpkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dj7sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.404052 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a2fc544-f017-4d4e-8bbf-90397a6b121f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bd9f1e696151f8d88ef7193d851f793a4cde74941d6a7117d7835a10f51cfa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cb4038621f61856705fc0dbf8914a560d4a6fb25f44371d3147e4a962ce4a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a7a888a9beb01304fb4552ecf7a742a42b5d52e26f0c832f5cdfad29289da7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.420849 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.432767 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.445841 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l4chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bc978ee-ee67-4f69-8d91-361eb5b226fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4622b86969a461d424b68bc7d2a675c86d3fc67817f89ede9937048e0cdc09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l4chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.471605 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eab0d2e6-b6a3-4167-83a8-9a1e4662fa38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0756a1ed611ee30513307450b56a0a19ffdd6d4671c7ae1b9e59ead1bddd7564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c39b2e34508f9b18b5d851feabce4a6117eeb3e95172f538ec23b0d2dafd1294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whsvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:21:32Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ptp6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.488837 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.489103 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.489199 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.489279 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.489369 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:29Z","lastTransitionTime":"2026-02-01T14:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.591754 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.591817 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.591833 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.591857 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.591900 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:29Z","lastTransitionTime":"2026-02-01T14:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.693527 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.693560 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.693568 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.693581 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.693589 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:29Z","lastTransitionTime":"2026-02-01T14:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.795824 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.795866 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.795892 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.795905 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.795912 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:29Z","lastTransitionTime":"2026-02-01T14:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.898174 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.898207 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.898216 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.898230 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.898240 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:29Z","lastTransitionTime":"2026-02-01T14:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.999940 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:29.999988 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:29 crc kubenswrapper[4820]: I0201 14:22:30.000002 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.000023 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.000037 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:30Z","lastTransitionTime":"2026-02-01T14:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.102558 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.102618 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.102641 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.102661 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.102676 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:30Z","lastTransitionTime":"2026-02-01T14:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.204679 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.204721 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.204735 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.204756 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.204769 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:30Z","lastTransitionTime":"2026-02-01T14:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.224116 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 11:45:56.925630234 +0000 UTC Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.306850 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.306914 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.306927 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.306944 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.306957 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:30Z","lastTransitionTime":"2026-02-01T14:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.408781 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.408842 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.408920 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.408945 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.408961 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:30Z","lastTransitionTime":"2026-02-01T14:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.511243 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.511298 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.511307 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.511321 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.511332 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:30Z","lastTransitionTime":"2026-02-01T14:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.613768 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.613819 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.613830 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.613850 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.613863 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:30Z","lastTransitionTime":"2026-02-01T14:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.716082 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.716127 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.716139 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.716155 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.716166 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:30Z","lastTransitionTime":"2026-02-01T14:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.817805 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.817835 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.817845 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.817858 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.817866 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:30Z","lastTransitionTime":"2026-02-01T14:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.920196 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.920229 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.920238 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.920252 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:30 crc kubenswrapper[4820]: I0201 14:22:30.920262 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:30Z","lastTransitionTime":"2026-02-01T14:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.022897 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.022935 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.022944 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.022962 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.022974 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:31Z","lastTransitionTime":"2026-02-01T14:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.125645 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.125750 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.125774 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.125807 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.125828 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:31Z","lastTransitionTime":"2026-02-01T14:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.198318 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.198376 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.198407 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:22:31 crc kubenswrapper[4820]: E0201 14:22:31.198529 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.198573 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:22:31 crc kubenswrapper[4820]: E0201 14:22:31.198658 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:22:31 crc kubenswrapper[4820]: E0201 14:22:31.198822 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:22:31 crc kubenswrapper[4820]: E0201 14:22:31.199031 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.224236 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:13:11.117901612 +0000 UTC Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.228114 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.228160 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.228173 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.228187 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.228197 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:31Z","lastTransitionTime":"2026-02-01T14:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.330748 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.330795 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.330826 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.330847 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.330862 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:31Z","lastTransitionTime":"2026-02-01T14:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.433061 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.433124 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.433143 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.433165 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.433182 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:31Z","lastTransitionTime":"2026-02-01T14:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.536149 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.536227 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.536246 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.536324 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.536347 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:31Z","lastTransitionTime":"2026-02-01T14:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.639237 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.639269 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.639277 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.639289 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.639299 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:31Z","lastTransitionTime":"2026-02-01T14:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.741668 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.741700 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.741709 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.741721 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.741731 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:31Z","lastTransitionTime":"2026-02-01T14:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.844449 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.844485 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.844496 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.844510 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.844519 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:31Z","lastTransitionTime":"2026-02-01T14:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.947131 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.947170 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.947181 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.947197 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:31 crc kubenswrapper[4820]: I0201 14:22:31.947209 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:31Z","lastTransitionTime":"2026-02-01T14:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.050003 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.050046 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.050055 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.050071 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.050082 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:32Z","lastTransitionTime":"2026-02-01T14:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.152763 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.152794 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.152802 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.152816 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.152827 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:32Z","lastTransitionTime":"2026-02-01T14:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.225123 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 03:30:44.12849624 +0000 UTC Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.255358 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.255404 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.255417 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.255438 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.255452 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:32Z","lastTransitionTime":"2026-02-01T14:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.357602 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.357646 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.357658 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.357674 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.357686 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:32Z","lastTransitionTime":"2026-02-01T14:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.461499 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.461536 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.461550 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.461567 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.461579 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:32Z","lastTransitionTime":"2026-02-01T14:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.564052 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.564089 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.564099 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.564115 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.564125 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:32Z","lastTransitionTime":"2026-02-01T14:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.665709 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.665761 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.665771 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.665790 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.665803 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:32Z","lastTransitionTime":"2026-02-01T14:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.768303 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.768360 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.768378 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.768400 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.768416 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:32Z","lastTransitionTime":"2026-02-01T14:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.870812 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.870845 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.870856 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.870891 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.870905 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:32Z","lastTransitionTime":"2026-02-01T14:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.974889 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.974932 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.974943 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.974957 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:32 crc kubenswrapper[4820]: I0201 14:22:32.974968 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:32Z","lastTransitionTime":"2026-02-01T14:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.079074 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.079156 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.079193 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.079244 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.079273 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:33Z","lastTransitionTime":"2026-02-01T14:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.181706 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.181763 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.181778 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.181801 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.181815 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:33Z","lastTransitionTime":"2026-02-01T14:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.198047 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.198147 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:22:33 crc kubenswrapper[4820]: E0201 14:22:33.198208 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.198218 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:22:33 crc kubenswrapper[4820]: E0201 14:22:33.198265 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:22:33 crc kubenswrapper[4820]: E0201 14:22:33.198298 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.198070 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:22:33 crc kubenswrapper[4820]: E0201 14:22:33.198366 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.226095 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 19:57:34.092014802 +0000 UTC Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.284245 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.284275 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.284284 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.284297 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.284305 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:33Z","lastTransitionTime":"2026-02-01T14:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.386954 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.387202 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.387254 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.387284 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.387306 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:33Z","lastTransitionTime":"2026-02-01T14:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.490056 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.490134 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.490147 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.490168 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.490182 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:33Z","lastTransitionTime":"2026-02-01T14:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.592984 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.593041 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.593051 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.593069 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.593079 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:33Z","lastTransitionTime":"2026-02-01T14:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.696389 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.696458 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.696471 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.696494 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.696509 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:33Z","lastTransitionTime":"2026-02-01T14:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.798657 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.798693 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.798703 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.798716 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.798725 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:33Z","lastTransitionTime":"2026-02-01T14:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.900989 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.901061 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.901084 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.901112 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:33 crc kubenswrapper[4820]: I0201 14:22:33.901136 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:33Z","lastTransitionTime":"2026-02-01T14:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.002747 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.002785 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.002795 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.002807 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.002820 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:34Z","lastTransitionTime":"2026-02-01T14:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.105211 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.105250 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.105261 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.105277 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.105289 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:34Z","lastTransitionTime":"2026-02-01T14:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.207737 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.207798 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.207815 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.207835 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.207850 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:34Z","lastTransitionTime":"2026-02-01T14:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.227283 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 05:18:02.952549349 +0000 UTC Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.309805 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.309845 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.309854 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.309894 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.309908 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:34Z","lastTransitionTime":"2026-02-01T14:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.411945 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.412007 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.412018 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.412035 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.412064 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:34Z","lastTransitionTime":"2026-02-01T14:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.514541 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.514574 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.514582 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.514597 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.514609 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:34Z","lastTransitionTime":"2026-02-01T14:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.616505 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.616555 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.616569 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.616589 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.616606 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:34Z","lastTransitionTime":"2026-02-01T14:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.720842 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.720900 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.720912 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.720927 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.720937 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:34Z","lastTransitionTime":"2026-02-01T14:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.823160 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.823192 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.823200 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.823213 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.823222 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:34Z","lastTransitionTime":"2026-02-01T14:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.925485 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.925528 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.925539 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.925554 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:34 crc kubenswrapper[4820]: I0201 14:22:34.925566 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:34Z","lastTransitionTime":"2026-02-01T14:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.027634 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.028000 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.028011 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.028025 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.028034 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:35Z","lastTransitionTime":"2026-02-01T14:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.129718 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.129756 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.129766 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.129779 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.129788 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:35Z","lastTransitionTime":"2026-02-01T14:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.198540 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.198674 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:22:35 crc kubenswrapper[4820]: E0201 14:22:35.198794 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.198841 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.199118 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:22:35 crc kubenswrapper[4820]: E0201 14:22:35.199361 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 14:22:35 crc kubenswrapper[4820]: E0201 14:22:35.199925 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:22:35 crc kubenswrapper[4820]: E0201 14:22:35.200063 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.200290 4820 scope.go:117] "RemoveContainer" containerID="8f79dbe742b26404c994277986f796d382ed0b76cec41b8e23c39cfa39c98331" Feb 01 14:22:35 crc kubenswrapper[4820]: E0201 14:22:35.200457 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-m4skx_openshift-ovn-kubernetes(2c428279-629a-4fd5-9955-1598ed4f6f84)\"" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.213836 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aea2d10-281a-4986-b42d-205f8c7c1272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7
814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T14:21:18Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0201 14:21:02.900463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 14:21:02.902303 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2918664182/tls.crt::/tmp/serving-cert-2918664182/tls.key\\\\\\\"\\\\nI0201 14:21:18.388066 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 14:21:18.394864 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 14:21:18.394908 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 14:21:18.395228 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 14:21:18.395242 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 14:21:18.411066 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0201 14:21:18.411083 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0201 14:21:18.411108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 14:21:18.411125 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 14:21:18.411132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 14:21:18.411137 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 14:21:18.411142 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0201 14:21:18.412971 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.226075 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b609742b-d084-426e-91b2-295a55029b93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T14:20:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e02daf42c16519959234d13012c85757024b28979b4bbc46ebef19fc7c57c6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa8913f9bae7df497b0a4124aa907e230e99e11f2b46b0b765adfba3cc7d9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8810f9d798db466017c050c1aee60e6c4dfb31028044e738b334806ba6c538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T14:21:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64fbff194848e43b310b53537ffe0a2a2046761d8ac283714252f3117aa03a4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64fbff194848e43b310b53537ffe0a2a2046761d8ac283714252f3117aa03a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T14:21:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T14:21:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T14:20:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.228393 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 10:43:22.908030828 +0000 UTC Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.232346 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.232394 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.232410 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.232432 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.232449 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:35Z","lastTransitionTime":"2026-02-01T14:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.241643 4820 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T14:21:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T14:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.308294 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-r52d9" podStartSLOduration=76.30827367 podStartE2EDuration="1m16.30827367s" podCreationTimestamp="2026-02-01 14:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:22:35.289310306 +0000 UTC m=+96.809676590" watchObservedRunningTime="2026-02-01 14:22:35.30827367 +0000 UTC m=+96.828639954" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.308736 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-zbtsv" podStartSLOduration=76.30872659 podStartE2EDuration="1m16.30872659s" podCreationTimestamp="2026-02-01 14:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:22:35.308138837 +0000 UTC m=+96.828505121" watchObservedRunningTime="2026-02-01 14:22:35.30872659 +0000 UTC 
m=+96.829092874" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.335107 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.335148 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.335157 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.335171 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.335179 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:35Z","lastTransitionTime":"2026-02-01T14:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.348820 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=24.348797978 podStartE2EDuration="24.348797978s" podCreationTimestamp="2026-02-01 14:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:22:35.320169777 +0000 UTC m=+96.840536061" watchObservedRunningTime="2026-02-01 14:22:35.348797978 +0000 UTC m=+96.869164262" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.370621 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=8.370598076 podStartE2EDuration="8.370598076s" podCreationTimestamp="2026-02-01 14:22:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:22:35.348604413 +0000 UTC m=+96.868970707" watchObservedRunningTime="2026-02-01 14:22:35.370598076 +0000 UTC m=+96.890964360" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.396242 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=74.396215569 podStartE2EDuration="1m14.396215569s" podCreationTimestamp="2026-02-01 14:21:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:22:35.395974204 +0000 UTC m=+96.916340488" watchObservedRunningTime="2026-02-01 14:22:35.396215569 +0000 UTC m=+96.916581873" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.422731 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ptp6h" podStartSLOduration=75.422707843 podStartE2EDuration="1m15.422707843s" podCreationTimestamp="2026-02-01 14:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:22:35.422491628 +0000 UTC m=+96.942857912" watchObservedRunningTime="2026-02-01 14:22:35.422707843 +0000 UTC m=+96.943074127" Feb 01 14:22:35 crc 
kubenswrapper[4820]: I0201 14:22:35.437685 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.437733 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.437743 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.437759 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.437770 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:35Z","lastTransitionTime":"2026-02-01T14:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.451000 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-l4chw" podStartSLOduration=76.450981035 podStartE2EDuration="1m16.450981035s" podCreationTimestamp="2026-02-01 14:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:22:35.44851615 +0000 UTC m=+96.968882444" watchObservedRunningTime="2026-02-01 14:22:35.450981035 +0000 UTC m=+96.971347319" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.459852 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podStartSLOduration=76.459830853 podStartE2EDuration="1m16.459830853s" podCreationTimestamp="2026-02-01 14:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:22:35.458941124 +0000 UTC m=+96.979307408" watchObservedRunningTime="2026-02-01 14:22:35.459830853 +0000 UTC m=+96.980197137" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.473668 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-q922s" podStartSLOduration=76.473649473 podStartE2EDuration="1m16.473649473s" podCreationTimestamp="2026-02-01 14:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:22:35.47351894 +0000 UTC m=+96.993885224" watchObservedRunningTime="2026-02-01 14:22:35.473649473 +0000 UTC m=+96.994015757" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.541035 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.541075 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.541083 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.541101 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.541111 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:35Z","lastTransitionTime":"2026-02-01T14:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.642718 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.642756 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.642769 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.642784 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.642793 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:35Z","lastTransitionTime":"2026-02-01T14:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.746955 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.747064 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.747123 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.747150 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.747207 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:35Z","lastTransitionTime":"2026-02-01T14:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.852127 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.852224 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.852244 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.852270 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.852288 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:35Z","lastTransitionTime":"2026-02-01T14:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.954220 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.954296 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.954320 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.954348 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:35 crc kubenswrapper[4820]: I0201 14:22:35.954372 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:35Z","lastTransitionTime":"2026-02-01T14:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.056766 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.056804 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.056829 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.056846 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.056855 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:36Z","lastTransitionTime":"2026-02-01T14:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.159079 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.159111 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.159121 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.159134 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.159142 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:36Z","lastTransitionTime":"2026-02-01T14:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.229151 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:36:25.222147797 +0000 UTC Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.261922 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.261972 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.261987 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.262007 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.262021 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:36Z","lastTransitionTime":"2026-02-01T14:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.333559 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.333631 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.333649 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.333673 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.333690 4820 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T14:22:36Z","lastTransitionTime":"2026-02-01T14:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.383571 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7bxn"]
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.384137 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7bxn"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.388702 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.388969 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.389221 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.392579 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.414690 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.414664582 podStartE2EDuration="1m18.414664582s" podCreationTimestamp="2026-02-01 14:21:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:22:36.4123011 +0000 UTC m=+97.932667384" watchObservedRunningTime="2026-02-01 14:22:36.414664582 +0000 UTC m=+97.935030866"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.424976 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=45.424950163 podStartE2EDuration="45.424950163s" podCreationTimestamp="2026-02-01 14:21:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:22:36.424335589 +0000 UTC m=+97.944701873" watchObservedRunningTime="2026-02-01 14:22:36.424950163 +0000 UTC m=+97.945316487"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.457777 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9da05399-9e37-4bf2-9912-39b66533373c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-g7bxn\" (UID: \"9da05399-9e37-4bf2-9912-39b66533373c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7bxn"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.457838 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9da05399-9e37-4bf2-9912-39b66533373c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-g7bxn\" (UID: \"9da05399-9e37-4bf2-9912-39b66533373c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7bxn"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.457900 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9da05399-9e37-4bf2-9912-39b66533373c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-g7bxn\" (UID: \"9da05399-9e37-4bf2-9912-39b66533373c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7bxn"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.457918 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9da05399-9e37-4bf2-9912-39b66533373c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-g7bxn\" (UID: \"9da05399-9e37-4bf2-9912-39b66533373c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7bxn"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.457951 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9da05399-9e37-4bf2-9912-39b66533373c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-g7bxn\" (UID: \"9da05399-9e37-4bf2-9912-39b66533373c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7bxn"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.558864 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9da05399-9e37-4bf2-9912-39b66533373c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-g7bxn\" (UID: \"9da05399-9e37-4bf2-9912-39b66533373c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7bxn"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.558953 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9da05399-9e37-4bf2-9912-39b66533373c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-g7bxn\" (UID: \"9da05399-9e37-4bf2-9912-39b66533373c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7bxn"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.558974 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9da05399-9e37-4bf2-9912-39b66533373c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-g7bxn\" (UID: \"9da05399-9e37-4bf2-9912-39b66533373c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7bxn"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.558991 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9da05399-9e37-4bf2-9912-39b66533373c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-g7bxn\" (UID: \"9da05399-9e37-4bf2-9912-39b66533373c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7bxn"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.559024 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9da05399-9e37-4bf2-9912-39b66533373c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-g7bxn\" (UID: \"9da05399-9e37-4bf2-9912-39b66533373c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7bxn"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.559148 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9da05399-9e37-4bf2-9912-39b66533373c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-g7bxn\" (UID: \"9da05399-9e37-4bf2-9912-39b66533373c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7bxn"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.559246 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9da05399-9e37-4bf2-9912-39b66533373c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-g7bxn\" (UID: \"9da05399-9e37-4bf2-9912-39b66533373c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7bxn"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.560751 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9da05399-9e37-4bf2-9912-39b66533373c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-g7bxn\" (UID: \"9da05399-9e37-4bf2-9912-39b66533373c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7bxn"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.570387 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9da05399-9e37-4bf2-9912-39b66533373c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-g7bxn\" (UID: \"9da05399-9e37-4bf2-9912-39b66533373c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7bxn"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.579200 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9da05399-9e37-4bf2-9912-39b66533373c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-g7bxn\" (UID: \"9da05399-9e37-4bf2-9912-39b66533373c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7bxn"
Feb 01 14:22:36 crc kubenswrapper[4820]: I0201 14:22:36.702357 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7bxn"
Feb 01 14:22:36 crc kubenswrapper[4820]: W0201 14:22:36.727625 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9da05399_9e37_4bf2_9912_39b66533373c.slice/crio-161a844a9428b6886821d7317e0ae274e060537ce438f4857d41fda3fa9fc236 WatchSource:0}: Error finding container 161a844a9428b6886821d7317e0ae274e060537ce438f4857d41fda3fa9fc236: Status 404 returned error can't find the container with id 161a844a9428b6886821d7317e0ae274e060537ce438f4857d41fda3fa9fc236
Feb 01 14:22:37 crc kubenswrapper[4820]: I0201 14:22:37.198372 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg"
Feb 01 14:22:37 crc kubenswrapper[4820]: E0201 14:22:37.199028 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5"
Feb 01 14:22:37 crc kubenswrapper[4820]: I0201 14:22:37.198469 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 14:22:37 crc kubenswrapper[4820]: I0201 14:22:37.198561 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 14:22:37 crc kubenswrapper[4820]: E0201 14:22:37.199690 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 14:22:37 crc kubenswrapper[4820]: E0201 14:22:37.199848 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 14:22:37 crc kubenswrapper[4820]: I0201 14:22:37.198401 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 14:22:37 crc kubenswrapper[4820]: E0201 14:22:37.200329 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 14:22:37 crc kubenswrapper[4820]: I0201 14:22:37.229635 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 12:38:20.336099329 +0000 UTC
Feb 01 14:22:37 crc kubenswrapper[4820]: I0201 14:22:37.229692 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Feb 01 14:22:37 crc kubenswrapper[4820]: I0201 14:22:37.237015 4820 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 01 14:22:37 crc kubenswrapper[4820]: I0201 14:22:37.673417 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs\") pod \"network-metrics-daemon-dj7sg\" (UID: \"8befd56b-2ebe-48c7-9027-4f906b2e09d5\") " pod="openshift-multus/network-metrics-daemon-dj7sg"
Feb 01 14:22:37 crc kubenswrapper[4820]: E0201 14:22:37.673614 4820 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 01 14:22:37 crc kubenswrapper[4820]: E0201 14:22:37.673724 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs podName:8befd56b-2ebe-48c7-9027-4f906b2e09d5 nodeName:}" failed. No retries permitted until 2026-02-01 14:23:41.673691083 +0000 UTC m=+163.194057407 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs") pod "network-metrics-daemon-dj7sg" (UID: "8befd56b-2ebe-48c7-9027-4f906b2e09d5") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 01 14:22:37 crc kubenswrapper[4820]: I0201 14:22:37.684556 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7bxn" event={"ID":"9da05399-9e37-4bf2-9912-39b66533373c","Type":"ContainerStarted","Data":"95e9e617e00d5bcf3dbf5b9f24ac4a95366ba8ee845536404886944fee3415aa"}
Feb 01 14:22:37 crc kubenswrapper[4820]: I0201 14:22:37.684674 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7bxn" event={"ID":"9da05399-9e37-4bf2-9912-39b66533373c","Type":"ContainerStarted","Data":"161a844a9428b6886821d7317e0ae274e060537ce438f4857d41fda3fa9fc236"}
Feb 01 14:22:37 crc kubenswrapper[4820]: I0201 14:22:37.700189 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7bxn" podStartSLOduration=78.700160535 podStartE2EDuration="1m18.700160535s" podCreationTimestamp="2026-02-01 14:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:22:37.699389429 +0000 UTC m=+99.219755723" watchObservedRunningTime="2026-02-01 14:22:37.700160535 +0000 UTC m=+99.220526869"
Feb 01 14:22:39 crc kubenswrapper[4820]: I0201 14:22:39.198679 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg"
Feb 01 14:22:39 crc kubenswrapper[4820]: I0201 14:22:39.198811 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 14:22:39 crc kubenswrapper[4820]: I0201 14:22:39.198848 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 14:22:39 crc kubenswrapper[4820]: I0201 14:22:39.198958 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 14:22:39 crc kubenswrapper[4820]: E0201 14:22:39.199664 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5"
Feb 01 14:22:39 crc kubenswrapper[4820]: E0201 14:22:39.199774 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 14:22:39 crc kubenswrapper[4820]: E0201 14:22:39.199921 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 14:22:39 crc kubenswrapper[4820]: E0201 14:22:39.199938 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 14:22:41 crc kubenswrapper[4820]: I0201 14:22:41.198209 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg"
Feb 01 14:22:41 crc kubenswrapper[4820]: I0201 14:22:41.198255 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 14:22:41 crc kubenswrapper[4820]: I0201 14:22:41.198209 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 14:22:41 crc kubenswrapper[4820]: I0201 14:22:41.198335 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 14:22:41 crc kubenswrapper[4820]: E0201 14:22:41.198496 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5"
Feb 01 14:22:41 crc kubenswrapper[4820]: E0201 14:22:41.198595 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 14:22:41 crc kubenswrapper[4820]: E0201 14:22:41.198786 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 14:22:41 crc kubenswrapper[4820]: E0201 14:22:41.198837 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 14:22:43 crc kubenswrapper[4820]: I0201 14:22:43.198171 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg"
Feb 01 14:22:43 crc kubenswrapper[4820]: I0201 14:22:43.198213 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 14:22:43 crc kubenswrapper[4820]: I0201 14:22:43.198280 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 14:22:43 crc kubenswrapper[4820]: E0201 14:22:43.198322 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5"
Feb 01 14:22:43 crc kubenswrapper[4820]: E0201 14:22:43.198418 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 14:22:43 crc kubenswrapper[4820]: I0201 14:22:43.198437 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 14:22:43 crc kubenswrapper[4820]: E0201 14:22:43.198511 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 14:22:43 crc kubenswrapper[4820]: E0201 14:22:43.198576 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 14:22:45 crc kubenswrapper[4820]: I0201 14:22:45.198211 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg"
Feb 01 14:22:45 crc kubenswrapper[4820]: I0201 14:22:45.198220 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 14:22:45 crc kubenswrapper[4820]: I0201 14:22:45.198315 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 14:22:45 crc kubenswrapper[4820]: I0201 14:22:45.198328 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 14:22:45 crc kubenswrapper[4820]: E0201 14:22:45.198425 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5"
Feb 01 14:22:45 crc kubenswrapper[4820]: E0201 14:22:45.198535 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 14:22:45 crc kubenswrapper[4820]: E0201 14:22:45.198704 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 14:22:45 crc kubenswrapper[4820]: E0201 14:22:45.198828 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 14:22:47 crc kubenswrapper[4820]: I0201 14:22:47.198428 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg"
Feb 01 14:22:47 crc kubenswrapper[4820]: E0201 14:22:47.198852 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5"
Feb 01 14:22:47 crc kubenswrapper[4820]: I0201 14:22:47.198481 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 14:22:47 crc kubenswrapper[4820]: I0201 14:22:47.198509 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 14:22:47 crc kubenswrapper[4820]: I0201 14:22:47.198482 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 14:22:47 crc kubenswrapper[4820]: E0201 14:22:47.199110 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 14:22:47 crc kubenswrapper[4820]: E0201 14:22:47.199171 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 14:22:47 crc kubenswrapper[4820]: E0201 14:22:47.199276 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 14:22:49 crc kubenswrapper[4820]: I0201 14:22:49.198046 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 14:22:49 crc kubenswrapper[4820]: I0201 14:22:49.198008 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 14:22:49 crc kubenswrapper[4820]: I0201 14:22:49.198346 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 14:22:49 crc kubenswrapper[4820]: E0201 14:22:49.204117 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 14:22:49 crc kubenswrapper[4820]: I0201 14:22:49.204180 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg"
Feb 01 14:22:49 crc kubenswrapper[4820]: E0201 14:22:49.204233 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 14:22:49 crc kubenswrapper[4820]: E0201 14:22:49.204330 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5"
Feb 01 14:22:49 crc kubenswrapper[4820]: E0201 14:22:49.204367 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 14:22:49 crc kubenswrapper[4820]: I0201 14:22:49.204858 4820 scope.go:117] "RemoveContainer" containerID="8f79dbe742b26404c994277986f796d382ed0b76cec41b8e23c39cfa39c98331"
Feb 01 14:22:49 crc kubenswrapper[4820]: E0201 14:22:49.205002 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-m4skx_openshift-ovn-kubernetes(2c428279-629a-4fd5-9955-1598ed4f6f84)\"" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84"
Feb 01 14:22:51 crc kubenswrapper[4820]: I0201 14:22:51.197794 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg"
Feb 01 14:22:51 crc kubenswrapper[4820]: I0201 14:22:51.197848 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 14:22:51 crc kubenswrapper[4820]: I0201 14:22:51.197862 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 14:22:51 crc kubenswrapper[4820]: E0201 14:22:51.197939 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5"
Feb 01 14:22:51 crc kubenswrapper[4820]: I0201 14:22:51.197960 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 14:22:51 crc kubenswrapper[4820]: E0201 14:22:51.198183 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 14:22:51 crc kubenswrapper[4820]: E0201 14:22:51.198180 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 14:22:51 crc kubenswrapper[4820]: E0201 14:22:51.198333 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 14:22:53 crc kubenswrapper[4820]: I0201 14:22:53.198194 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 14:22:53 crc kubenswrapper[4820]: I0201 14:22:53.198206 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 14:22:53 crc kubenswrapper[4820]: I0201 14:22:53.198214 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 14:22:53 crc kubenswrapper[4820]: I0201 14:22:53.198235 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg"
Feb 01 14:22:53 crc kubenswrapper[4820]: E0201 14:22:53.198429 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 14:22:53 crc kubenswrapper[4820]: E0201 14:22:53.198530 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 14:22:53 crc kubenswrapper[4820]: E0201 14:22:53.198644 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5"
Feb 01 14:22:53 crc kubenswrapper[4820]: E0201 14:22:53.198722 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 14:22:55 crc kubenswrapper[4820]: I0201 14:22:55.198522 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 14:22:55 crc kubenswrapper[4820]: I0201 14:22:55.198521 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 14:22:55 crc kubenswrapper[4820]: I0201 14:22:55.198589 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg"
Feb 01 14:22:55 crc kubenswrapper[4820]: I0201 14:22:55.198663 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 14:22:55 crc kubenswrapper[4820]: E0201 14:22:55.198811 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 14:22:55 crc kubenswrapper[4820]: E0201 14:22:55.198922 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 14:22:55 crc kubenswrapper[4820]: E0201 14:22:55.199022 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5"
Feb 01 14:22:55 crc kubenswrapper[4820]: E0201 14:22:55.199116 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 14:22:56 crc kubenswrapper[4820]: I0201 14:22:56.741136 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q922s_20f8fae3-1755-461a-8748-a0033423ad5a/kube-multus/1.log"
Feb 01 14:22:56 crc kubenswrapper[4820]: I0201 14:22:56.742189 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q922s_20f8fae3-1755-461a-8748-a0033423ad5a/kube-multus/0.log"
Feb 01 14:22:56 crc kubenswrapper[4820]: I0201 14:22:56.742226 4820 generic.go:334] "Generic (PLEG): container finished" podID="20f8fae3-1755-461a-8748-a0033423ad5a" containerID="0da5ee5ab8907d5144b25f7740d060ca28d4b2be0d366187088c6562cd92eb9a" exitCode=1
Feb 01 14:22:56 crc kubenswrapper[4820]: I0201 14:22:56.742254 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q922s" event={"ID":"20f8fae3-1755-461a-8748-a0033423ad5a","Type":"ContainerDied","Data":"0da5ee5ab8907d5144b25f7740d060ca28d4b2be0d366187088c6562cd92eb9a"}
Feb 01 14:22:56 crc kubenswrapper[4820]: I0201 14:22:56.742283 4820 scope.go:117] "RemoveContainer" containerID="48d4bbf05284f1d827ffe8eb692adf5a868c7fea624cfb8ad370b5df02e29bc7"
Feb 01 14:22:56 crc kubenswrapper[4820]: I0201 14:22:56.742641 4820 scope.go:117] "RemoveContainer" containerID="0da5ee5ab8907d5144b25f7740d060ca28d4b2be0d366187088c6562cd92eb9a"
Feb 01 14:22:56 crc kubenswrapper[4820]: E0201 14:22:56.742776 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-q922s_openshift-multus(20f8fae3-1755-461a-8748-a0033423ad5a)\"" pod="openshift-multus/multus-q922s" podUID="20f8fae3-1755-461a-8748-a0033423ad5a"
Feb 01 14:22:57 crc kubenswrapper[4820]: I0201 14:22:57.198286 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 14:22:57 crc kubenswrapper[4820]: E0201 14:22:57.198495 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 14:22:57 crc kubenswrapper[4820]: I0201 14:22:57.198314 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 14:22:57 crc kubenswrapper[4820]: I0201 14:22:57.198576 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 14:22:57 crc kubenswrapper[4820]: I0201 14:22:57.198559 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg"
Feb 01 14:22:57 crc kubenswrapper[4820]: E0201 14:22:57.198654 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 14:22:57 crc kubenswrapper[4820]: E0201 14:22:57.198783 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 14:22:57 crc kubenswrapper[4820]: E0201 14:22:57.198978 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5"
Feb 01 14:22:57 crc kubenswrapper[4820]: I0201 14:22:57.748575 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q922s_20f8fae3-1755-461a-8748-a0033423ad5a/kube-multus/1.log"
Feb 01 14:22:59 crc kubenswrapper[4820]: E0201 14:22:59.186280 4820 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Feb 01 14:22:59 crc kubenswrapper[4820]: I0201 14:22:59.199034 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 14:22:59 crc kubenswrapper[4820]: I0201 14:22:59.199068 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 14:22:59 crc kubenswrapper[4820]: I0201 14:22:59.199087 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg"
Feb 01 14:22:59 crc kubenswrapper[4820]: E0201 14:22:59.199162 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 14:22:59 crc kubenswrapper[4820]: E0201 14:22:59.199258 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 14:22:59 crc kubenswrapper[4820]: E0201 14:22:59.199369 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5"
Feb 01 14:22:59 crc kubenswrapper[4820]: I0201 14:22:59.199405 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 14:22:59 crc kubenswrapper[4820]: E0201 14:22:59.199512 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 14:22:59 crc kubenswrapper[4820]: E0201 14:22:59.308702 4820 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 01 14:23:01 crc kubenswrapper[4820]: I0201 14:23:01.198926 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg"
Feb 01 14:23:01 crc kubenswrapper[4820]: I0201 14:23:01.198944 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 14:23:01 crc kubenswrapper[4820]: I0201 14:23:01.199075 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 14:23:01 crc kubenswrapper[4820]: E0201 14:23:01.199203 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5"
Feb 01 14:23:01 crc kubenswrapper[4820]: I0201 14:23:01.199507 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 14:23:01 crc kubenswrapper[4820]: E0201 14:23:01.199607 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 14:23:01 crc kubenswrapper[4820]: E0201 14:23:01.199819 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 14:23:01 crc kubenswrapper[4820]: E0201 14:23:01.200074 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 14:23:02 crc kubenswrapper[4820]: I0201 14:23:02.199091 4820 scope.go:117] "RemoveContainer" containerID="8f79dbe742b26404c994277986f796d382ed0b76cec41b8e23c39cfa39c98331"
Feb 01 14:23:02 crc kubenswrapper[4820]: I0201 14:23:02.765934 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4skx_2c428279-629a-4fd5-9955-1598ed4f6f84/ovnkube-controller/3.log"
Feb 01 14:23:02 crc kubenswrapper[4820]: I0201 14:23:02.768626 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerStarted","Data":"1fdff6e21a556b7683444220a0668c8589786a358238fba9043bb1e7ce3d8206"}
Feb 01 14:23:02 crc kubenswrapper[4820]: I0201 14:23:02.769010 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx"
Feb 01 14:23:02 crc kubenswrapper[4820]: I0201 14:23:02.793622 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" podStartSLOduration=103.793604805 podStartE2EDuration="1m43.793604805s" podCreationTimestamp="2026-02-01 14:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:02.79246587 +0000 UTC m=+124.312832154" watchObservedRunningTime="2026-02-01 14:23:02.793604805 +0000 UTC m=+124.313971089"
Feb 01 14:23:03 crc kubenswrapper[4820]: I0201 14:23:03.088976 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-dj7sg"]
Feb 01 14:23:03 crc kubenswrapper[4820]: I0201 14:23:03.089105 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg"
Feb 01 14:23:03 crc kubenswrapper[4820]: E0201 14:23:03.089205 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5"
Feb 01 14:23:03 crc kubenswrapper[4820]: I0201 14:23:03.198497 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 14:23:03 crc kubenswrapper[4820]: I0201 14:23:03.198546 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 14:23:03 crc kubenswrapper[4820]: I0201 14:23:03.198747 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 14:23:03 crc kubenswrapper[4820]: E0201 14:23:03.198807 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 14:23:03 crc kubenswrapper[4820]: E0201 14:23:03.198929 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 14:23:03 crc kubenswrapper[4820]: E0201 14:23:03.199023 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 14:23:04 crc kubenswrapper[4820]: E0201 14:23:04.310014 4820 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 01 14:23:05 crc kubenswrapper[4820]: I0201 14:23:05.198474 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 14:23:05 crc kubenswrapper[4820]: I0201 14:23:05.198526 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 14:23:05 crc kubenswrapper[4820]: I0201 14:23:05.198485 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 14:23:05 crc kubenswrapper[4820]: E0201 14:23:05.198642 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 14:23:05 crc kubenswrapper[4820]: E0201 14:23:05.198754 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 14:23:05 crc kubenswrapper[4820]: E0201 14:23:05.198958 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 14:23:05 crc kubenswrapper[4820]: I0201 14:23:05.199054 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg"
Feb 01 14:23:05 crc kubenswrapper[4820]: E0201 14:23:05.199198 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5"
Feb 01 14:23:07 crc kubenswrapper[4820]: I0201 14:23:07.198711 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg"
Feb 01 14:23:07 crc kubenswrapper[4820]: I0201 14:23:07.198779 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 14:23:07 crc kubenswrapper[4820]: I0201 14:23:07.198779 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 14:23:07 crc kubenswrapper[4820]: I0201 14:23:07.198846 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 14:23:07 crc kubenswrapper[4820]: E0201 14:23:07.198845 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5"
Feb 01 14:23:07 crc kubenswrapper[4820]: E0201 14:23:07.198958 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 14:23:07 crc kubenswrapper[4820]: E0201 14:23:07.199027 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 14:23:07 crc kubenswrapper[4820]: E0201 14:23:07.199132 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 14:23:08 crc kubenswrapper[4820]: I0201 14:23:08.198350 4820 scope.go:117] "RemoveContainer" containerID="0da5ee5ab8907d5144b25f7740d060ca28d4b2be0d366187088c6562cd92eb9a"
Feb 01 14:23:08 crc kubenswrapper[4820]: I0201 14:23:08.787258 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q922s_20f8fae3-1755-461a-8748-a0033423ad5a/kube-multus/1.log"
Feb 01 14:23:08 crc kubenswrapper[4820]: I0201 14:23:08.787816 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q922s" event={"ID":"20f8fae3-1755-461a-8748-a0033423ad5a","Type":"ContainerStarted","Data":"afac57817affcc730677ddc4fa2a32dfe0dbe317a5a6f1b31a7e83564ca75eb7"}
Feb 01 14:23:09 crc kubenswrapper[4820]: I0201 14:23:09.198109 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg"
Feb 01 14:23:09 crc kubenswrapper[4820]: I0201 14:23:09.198196 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 14:23:09 crc kubenswrapper[4820]: I0201 14:23:09.198259 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 14:23:09 crc kubenswrapper[4820]: I0201 14:23:09.198316 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 14:23:09 crc kubenswrapper[4820]: E0201 14:23:09.200050 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5"
Feb 01 14:23:09 crc kubenswrapper[4820]: E0201 14:23:09.200084 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 14:23:09 crc kubenswrapper[4820]: E0201 14:23:09.200309 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 14:23:09 crc kubenswrapper[4820]: E0201 14:23:09.200363 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 14:23:09 crc kubenswrapper[4820]: E0201 14:23:09.310974 4820 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 01 14:23:11 crc kubenswrapper[4820]: I0201 14:23:11.198039 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg"
Feb 01 14:23:11 crc kubenswrapper[4820]: I0201 14:23:11.198103 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 14:23:11 crc kubenswrapper[4820]: I0201 14:23:11.198154 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 14:23:11 crc kubenswrapper[4820]: I0201 14:23:11.198191 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 14:23:11 crc kubenswrapper[4820]: E0201 14:23:11.198647 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 14:23:11 crc kubenswrapper[4820]: E0201 14:23:11.198423 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5"
Feb 01 14:23:11 crc kubenswrapper[4820]: E0201 14:23:11.198703 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 14:23:11 crc kubenswrapper[4820]: E0201 14:23:11.198780 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 14:23:14 crc kubenswrapper[4820]: I0201 14:23:13.198291 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 14:23:14 crc kubenswrapper[4820]: I0201 14:23:13.198339 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 14:23:14 crc kubenswrapper[4820]: I0201 14:23:13.198413 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 14:23:14 crc kubenswrapper[4820]: E0201 14:23:13.198425 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 14:23:14 crc kubenswrapper[4820]: E0201 14:23:13.198500 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 14:23:14 crc kubenswrapper[4820]: E0201 14:23:13.198555 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 14:23:14 crc kubenswrapper[4820]: I0201 14:23:13.198690 4820 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:23:14 crc kubenswrapper[4820]: E0201 14:23:13.198808 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dj7sg" podUID="8befd56b-2ebe-48c7-9027-4f906b2e09d5" Feb 01 14:23:15 crc kubenswrapper[4820]: I0201 14:23:15.198534 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:23:15 crc kubenswrapper[4820]: I0201 14:23:15.198570 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:23:15 crc kubenswrapper[4820]: I0201 14:23:15.199065 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:23:15 crc kubenswrapper[4820]: I0201 14:23:15.199381 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:23:15 crc kubenswrapper[4820]: I0201 14:23:15.200644 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 01 14:23:15 crc kubenswrapper[4820]: I0201 14:23:15.203184 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 01 14:23:15 crc kubenswrapper[4820]: I0201 14:23:15.203380 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 01 14:23:15 crc kubenswrapper[4820]: I0201 14:23:15.203381 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 01 14:23:15 crc kubenswrapper[4820]: I0201 14:23:15.203464 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 01 14:23:15 crc kubenswrapper[4820]: I0201 14:23:15.203681 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 01 14:23:16 crc kubenswrapper[4820]: I0201 14:23:16.979661 4820 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.035508 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-kh9nd"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.037483 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-kh9nd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.044145 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.044584 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.044976 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.045167 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.045193 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.048012 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.050560 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.059978 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.060055 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.062093 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.062375 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6xsfq"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.063049 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-46bvd"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.063613 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gwgjr"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.064746 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.065599 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-6xsfq" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.066733 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-46bvd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.067782 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.067945 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.068063 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.069986 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.071021 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.071153 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-n22zj"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.071777 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n22zj" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.077488 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.077857 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.078233 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.078621 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f4mwh"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.078740 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.079310 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.083288 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.083442 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.083678 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.085848 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jzsbs"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.086395 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.086464 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.086724 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.087038 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.087175 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.087342 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.087501 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.091172 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.091311 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.091650 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.091824 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.092003 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.092208 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.092374 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.092539 4820 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.094270 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.094661 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.094850 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.095046 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.097844 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2b7xd"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.098402 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.098532 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2b7xd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.099046 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.099164 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jc9mp"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.100019 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jc9mp" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.104943 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bktjb"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.105701 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-m7jf2"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.107772 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bktjb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.108377 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-m7jf2" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.110394 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-j7nmb"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.111286 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.110495 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.110722 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.113420 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.115954 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.116128 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.116257 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.116385 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.116519 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.116746 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.117051 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.118169 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.118318 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.119257 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.119362 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.119434 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.119481 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.119650 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.119694 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.119756 4820 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"openshift-service-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.119847 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.119871 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.119861 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.119992 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.120005 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.120088 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.120116 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.119650 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.120189 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.120245 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.120251 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.120296 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.121190 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.121616 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.122079 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.122443 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.122888 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.123118 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.122087 4820 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.123458 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.123608 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.123670 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.123936 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.131616 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.132418 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.132609 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.133213 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.135129 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.136829 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.137079 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.137337 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.139298 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.151152 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.153742 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fzchg"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.154521 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.154962 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgsmt"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.155822 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgsmt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.155868 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-76qdl"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.156624 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-76qdl" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.159295 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.160236 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sqtv9"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.160817 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sqtv9" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.161819 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.162502 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.163237 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.164981 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t85m5"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.165849 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-t85m5" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.169484 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-5rtqk"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.170348 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5rtqk" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.174164 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2szbj"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.174839 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.175012 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2szbj" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.175988 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.176195 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.176517 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.177414 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.177941 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g52lb"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.178467 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g52lb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.180168 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.182028 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-kh9nd"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.182203 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-46bvd"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.183219 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.184046 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.185515 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.191702 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6ttx5"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.193830 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.198297 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lznd8"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.199154 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.199721 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6ttx5" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.205074 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.207606 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lznd8" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.208705 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/76eca8a3-6f22-4275-b05a-51b795162ce3-auth-proxy-config\") pod \"machine-approver-56656f9798-n22zj\" (UID: \"76eca8a3-6f22-4275-b05a-51b795162ce3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n22zj" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.208840 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91004505-bf59-410e-831a-62e980857994-serving-cert\") pod \"openshift-config-operator-7777fb866f-jc9mp\" (UID: \"91004505-bf59-410e-831a-62e980857994\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jc9mp" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.209848 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-oauth-serving-cert\") pod \"console-f9d7485db-j7nmb\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.209931 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj48d\" (UniqueName: \"kubernetes.io/projected/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-kube-api-access-tj48d\") pod \"console-f9d7485db-j7nmb\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.209963 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c2d6da6-c397-4077-81b1-d5b492811214-serving-cert\") pod \"controller-manager-879f6c89f-jzsbs\" (UID: \"8c2d6da6-c397-4077-81b1-d5b492811214\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.209994 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e8c3f194-8abb-4c41-8418-169b11d6afd2-etcd-client\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210021 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ae180b6-b7aa-413f-bf77-a9cad76c629e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bktjb\" (UID: \"5ae180b6-b7aa-413f-bf77-a9cad76c629e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bktjb" Feb 01 14:23:17 crc 
kubenswrapper[4820]: I0201 14:23:17.210060 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3daddc7f-d4d1-4682-97c2-b10266a3ab44-config\") pod \"machine-api-operator-5694c8668f-kh9nd\" (UID: \"3daddc7f-d4d1-4682-97c2-b10266a3ab44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kh9nd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210104 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cpfk\" (UniqueName: \"kubernetes.io/projected/6b6e0b13-d22d-412b-917d-4601a2421b6b-kube-api-access-2cpfk\") pod \"console-operator-58897d9998-2b7xd\" (UID: \"6b6e0b13-d22d-412b-917d-4601a2421b6b\") " pod="openshift-console-operator/console-operator-58897d9998-2b7xd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210129 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stzjv\" (UniqueName: \"kubernetes.io/projected/3449e7b8-24df-4789-959f-4ac101303cc2-kube-api-access-stzjv\") pod \"downloads-7954f5f757-m7jf2\" (UID: \"3449e7b8-24df-4789-959f-4ac101303cc2\") " pod="openshift-console/downloads-7954f5f757-m7jf2" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210155 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pswxw\" (UniqueName: \"kubernetes.io/projected/3daddc7f-d4d1-4682-97c2-b10266a3ab44-kube-api-access-pswxw\") pod \"machine-api-operator-5694c8668f-kh9nd\" (UID: \"3daddc7f-d4d1-4682-97c2-b10266a3ab44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kh9nd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210185 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7lzk\" (UniqueName: \"kubernetes.io/projected/7ef76960-0097-4477-ae8f-0f6ddb18920b-kube-api-access-x7lzk\") pod \"authentication-operator-69f744f599-6xsfq\" (UID: \"7ef76960-0097-4477-ae8f-0f6ddb18920b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xsfq" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210211 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/91004505-bf59-410e-831a-62e980857994-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jc9mp\" (UID: \"91004505-bf59-410e-831a-62e980857994\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jc9mp" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210239 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e8c3f194-8abb-4c41-8418-169b11d6afd2-audit\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210262 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rmmh\" (UniqueName: \"kubernetes.io/projected/e8c3f194-8abb-4c41-8418-169b11d6afd2-kube-api-access-2rmmh\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210339 
4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/76eca8a3-6f22-4275-b05a-51b795162ce3-machine-approver-tls\") pod \"machine-approver-56656f9798-n22zj\" (UID: \"76eca8a3-6f22-4275-b05a-51b795162ce3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n22zj" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210392 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e187fc4-2932-4e70-81d7-34fe2c16dcda-serving-cert\") pod \"apiserver-7bbb656c7d-dgds8\" (UID: \"4e187fc4-2932-4e70-81d7-34fe2c16dcda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210423 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4e187fc4-2932-4e70-81d7-34fe2c16dcda-audit-dir\") pod \"apiserver-7bbb656c7d-dgds8\" (UID: \"4e187fc4-2932-4e70-81d7-34fe2c16dcda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210448 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210475 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b6e0b13-d22d-412b-917d-4601a2421b6b-serving-cert\") pod \"console-operator-58897d9998-2b7xd\" (UID: \"6b6e0b13-d22d-412b-917d-4601a2421b6b\") " pod="openshift-console-operator/console-operator-58897d9998-2b7xd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210500 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8c2d6da6-c397-4077-81b1-d5b492811214-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jzsbs\" (UID: \"8c2d6da6-c397-4077-81b1-d5b492811214\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210529 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df0df42a-d0cc-4564-856d-a0d3ace0021f-client-ca\") pod \"route-controller-manager-6576b87f9c-95545\" (UID: \"df0df42a-d0cc-4564-856d-a0d3ace0021f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210552 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-console-serving-cert\") pod \"console-f9d7485db-j7nmb\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210577 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ae180b6-b7aa-413f-bf77-a9cad76c629e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bktjb\" (UID: \"5ae180b6-b7aa-413f-bf77-a9cad76c629e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bktjb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210606 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-console-config\") pod \"console-f9d7485db-j7nmb\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210631 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ef76960-0097-4477-ae8f-0f6ddb18920b-serving-cert\") pod \"authentication-operator-69f744f599-6xsfq\" (UID: \"7ef76960-0097-4477-ae8f-0f6ddb18920b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xsfq" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210704 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glhh8\" (UniqueName: \"kubernetes.io/projected/4e187fc4-2932-4e70-81d7-34fe2c16dcda-kube-api-access-glhh8\") pod \"apiserver-7bbb656c7d-dgds8\" (UID: \"4e187fc4-2932-4e70-81d7-34fe2c16dcda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210740 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0df42a-d0cc-4564-856d-a0d3ace0021f-config\") pod \"route-controller-manager-6576b87f9c-95545\" (UID: \"df0df42a-d0cc-4564-856d-a0d3ace0021f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210767 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76eca8a3-6f22-4275-b05a-51b795162ce3-config\") pod \"machine-approver-56656f9798-n22zj\" (UID: \"76eca8a3-6f22-4275-b05a-51b795162ce3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n22zj" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210793 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4e187fc4-2932-4e70-81d7-34fe2c16dcda-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dgds8\" (UID: \"4e187fc4-2932-4e70-81d7-34fe2c16dcda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210822 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210848 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" 
(UniqueName: \"kubernetes.io/configmap/e8c3f194-8abb-4c41-8418-169b11d6afd2-image-import-ca\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.210974 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4e187fc4-2932-4e70-81d7-34fe2c16dcda-encryption-config\") pod \"apiserver-7bbb656c7d-dgds8\" (UID: \"4e187fc4-2932-4e70-81d7-34fe2c16dcda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.211010 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3daddc7f-d4d1-4682-97c2-b10266a3ab44-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-kh9nd\" (UID: \"3daddc7f-d4d1-4682-97c2-b10266a3ab44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kh9nd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.211169 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2gvl\" (UniqueName: \"kubernetes.io/projected/df0df42a-d0cc-4564-856d-a0d3ace0021f-kube-api-access-j2gvl\") pod \"route-controller-manager-6576b87f9c-95545\" (UID: \"df0df42a-d0cc-4564-856d-a0d3ace0021f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.211214 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ef76960-0097-4477-ae8f-0f6ddb18920b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6xsfq\" (UID: \"7ef76960-0097-4477-ae8f-0f6ddb18920b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xsfq" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.211265 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c2d6da6-c397-4077-81b1-d5b492811214-client-ca\") pod \"controller-manager-879f6c89f-jzsbs\" (UID: \"8c2d6da6-c397-4077-81b1-d5b492811214\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.211300 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-console-oauth-config\") pod \"console-f9d7485db-j7nmb\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.211598 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df0df42a-d0cc-4564-856d-a0d3ace0021f-serving-cert\") pod \"route-controller-manager-6576b87f9c-95545\" (UID: \"df0df42a-d0cc-4564-856d-a0d3ace0021f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.211702 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/e8c3f194-8abb-4c41-8418-169b11d6afd2-etcd-serving-ca\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.211757 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ef76960-0097-4477-ae8f-0f6ddb18920b-config\") pod \"authentication-operator-69f744f599-6xsfq\" (UID: \"7ef76960-0097-4477-ae8f-0f6ddb18920b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xsfq" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.218064 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.222549 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-service-ca\") pod \"console-f9d7485db-j7nmb\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.222652 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4e187fc4-2932-4e70-81d7-34fe2c16dcda-audit-policies\") pod \"apiserver-7bbb656c7d-dgds8\" (UID: \"4e187fc4-2932-4e70-81d7-34fe2c16dcda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.222685 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.222715 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-audit-dir\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.222852 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.222909 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.222939 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.222965 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzrhk\" (UniqueName: \"kubernetes.io/projected/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-kube-api-access-fzrhk\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.222996 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b6e0b13-d22d-412b-917d-4601a2421b6b-config\") pod \"console-operator-58897d9998-2b7xd\" (UID: \"6b6e0b13-d22d-412b-917d-4601a2421b6b\") " pod="openshift-console-operator/console-operator-58897d9998-2b7xd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.223025 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv6mg\" (UniqueName: \"kubernetes.io/projected/d0c60840-a26e-42ee-9c1a-d3d27f4a5b1d-kube-api-access-nv6mg\") pod \"openshift-apiserver-operator-796bbdcf4f-46bvd\" (UID: \"d0c60840-a26e-42ee-9c1a-d3d27f4a5b1d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-46bvd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.223050 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e187fc4-2932-4e70-81d7-34fe2c16dcda-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dgds8\" (UID: \"4e187fc4-2932-4e70-81d7-34fe2c16dcda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.223805 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3daddc7f-d4d1-4682-97c2-b10266a3ab44-images\") pod \"machine-api-operator-5694c8668f-kh9nd\" (UID: \"3daddc7f-d4d1-4682-97c2-b10266a3ab44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kh9nd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.223958 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0c60840-a26e-42ee-9c1a-d3d27f4a5b1d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-46bvd\" (UID: \"d0c60840-a26e-42ee-9c1a-d3d27f4a5b1d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-46bvd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.223990 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8h7r\" (UniqueName: 
\"kubernetes.io/projected/8c2d6da6-c397-4077-81b1-d5b492811214-kube-api-access-d8h7r\") pod \"controller-manager-879f6c89f-jzsbs\" (UID: \"8c2d6da6-c397-4077-81b1-d5b492811214\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.224021 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8c3f194-8abb-4c41-8418-169b11d6afd2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.224054 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8c3f194-8abb-4c41-8418-169b11d6afd2-config\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.224076 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4e187fc4-2932-4e70-81d7-34fe2c16dcda-etcd-client\") pod \"apiserver-7bbb656c7d-dgds8\" (UID: \"4e187fc4-2932-4e70-81d7-34fe2c16dcda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.224104 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8c3f194-8abb-4c41-8418-169b11d6afd2-serving-cert\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.224132 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxpbm\" (UniqueName: \"kubernetes.io/projected/91004505-bf59-410e-831a-62e980857994-kube-api-access-lxpbm\") pod \"openshift-config-operator-7777fb866f-jc9mp\" (UID: \"91004505-bf59-410e-831a-62e980857994\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jc9mp" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.224161 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e8c3f194-8abb-4c41-8418-169b11d6afd2-node-pullsecrets\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.224181 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e8c3f194-8abb-4c41-8418-169b11d6afd2-encryption-config\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.224210 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.224238 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.224306 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2d6da6-c397-4077-81b1-d5b492811214-config\") pod \"controller-manager-879f6c89f-jzsbs\" (UID: \"8c2d6da6-c397-4077-81b1-d5b492811214\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.224334 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xh2v\" (UniqueName: \"kubernetes.io/projected/76eca8a3-6f22-4275-b05a-51b795162ce3-kube-api-access-5xh2v\") pod \"machine-approver-56656f9798-n22zj\" (UID: \"76eca8a3-6f22-4275-b05a-51b795162ce3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n22zj" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.224363 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-audit-policies\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.224391 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e8c3f194-8abb-4c41-8418-169b11d6afd2-audit-dir\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.224416 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ef76960-0097-4477-ae8f-0f6ddb18920b-service-ca-bundle\") pod \"authentication-operator-69f744f599-6xsfq\" (UID: \"7ef76960-0097-4477-ae8f-0f6ddb18920b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xsfq" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.236087 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgw7v\" (UniqueName: \"kubernetes.io/projected/5ae180b6-b7aa-413f-bf77-a9cad76c629e-kube-api-access-bgw7v\") pod \"openshift-controller-manager-operator-756b6f6bc6-bktjb\" (UID: \"5ae180b6-b7aa-413f-bf77-a9cad76c629e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bktjb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.236219 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/6b6e0b13-d22d-412b-917d-4601a2421b6b-trusted-ca\") pod \"console-operator-58897d9998-2b7xd\" (UID: \"6b6e0b13-d22d-412b-917d-4601a2421b6b\") " pod="openshift-console-operator/console-operator-58897d9998-2b7xd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.250366 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.250393 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.250411 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0c60840-a26e-42ee-9c1a-d3d27f4a5b1d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-46bvd\" (UID: \"d0c60840-a26e-42ee-9c1a-d3d27f4a5b1d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-46bvd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.250427 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-trusted-ca-bundle\") pod \"console-f9d7485db-j7nmb\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.251224 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rfpxv"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.252194 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6xsfq"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.252312 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gn8tb"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.252345 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rfpxv" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.251580 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.252802 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.253518 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xn6hf"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.253980 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-g7b4m"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.253671 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gn8tb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.254911 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xn6hf" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.255447 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zckvh"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.255631 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g7b4m" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.256571 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnjbh"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.256701 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zckvh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.257864 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5gzxg"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.257930 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnjbh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.258519 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gwgjr"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.258590 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wgsmb"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.259911 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbqv9"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.258801 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5gzxg" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.260330 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.260577 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.261339 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gtlmh"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.261827 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499255-6plmj"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.262142 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nlsnf"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.262355 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbqv9" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.262507 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nlsnf" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.262591 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gtlmh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.262705 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499255-6plmj" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.264064 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ssfjj"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.264830 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssfjj" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.268571 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2b7xd"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.271911 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-w4mdd"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.272607 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-w4mdd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.275316 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jzsbs"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.276406 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f4mwh"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.280785 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-njr47"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.281007 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.281709 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-njr47" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.282486 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fzchg"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.283900 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-j7nmb"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.285286 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.288132 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-m7jf2"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.290481 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-76qdl"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.291726 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgsmt"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.293238 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jc9mp"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.294583 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sqtv9"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.295513 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5gzxg"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.296863 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lznd8"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.299296 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2szbj"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.300737 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zckvh"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.300930 4820 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.302294 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bktjb"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.303666 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6ttx5"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.305306 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-hkggw"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.306588 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-hkggw" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.306831 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-bmjtr"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.307894 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bmjtr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.308336 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t85m5"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.309104 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbqv9"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.310330 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499255-6plmj"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.311273 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rfpxv"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.312357 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wgsmb"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.313288 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gn8tb"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.314264 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-g7b4m"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.316512 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g52lb"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.318899 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnjbh"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.319924 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.322319 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xn6hf"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.327662 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ssfjj"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 
14:23:17.331825 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nlsnf"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.333442 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-w4mdd"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.334673 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gtlmh"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.335894 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bmjtr"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.337815 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hkggw"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.339388 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-k57sd"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.340385 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-k57sd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.342459 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-k57sd"] Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351278 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df0df42a-d0cc-4564-856d-a0d3ace0021f-serving-cert\") pod \"route-controller-manager-6576b87f9c-95545\" (UID: \"df0df42a-d0cc-4564-856d-a0d3ace0021f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351309 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e8c3f194-8abb-4c41-8418-169b11d6afd2-etcd-serving-ca\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351329 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ef76960-0097-4477-ae8f-0f6ddb18920b-config\") pod \"authentication-operator-69f744f599-6xsfq\" (UID: \"7ef76960-0097-4477-ae8f-0f6ddb18920b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xsfq" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351346 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351363 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-service-ca\") pod \"console-f9d7485db-j7nmb\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351384 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4e187fc4-2932-4e70-81d7-34fe2c16dcda-audit-policies\") pod \"apiserver-7bbb656c7d-dgds8\" (UID: \"4e187fc4-2932-4e70-81d7-34fe2c16dcda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351400 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351414 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-audit-dir\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351430 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351445 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351459 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351473 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzrhk\" (UniqueName: \"kubernetes.io/projected/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-kube-api-access-fzrhk\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351488 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b6e0b13-d22d-412b-917d-4601a2421b6b-config\") pod \"console-operator-58897d9998-2b7xd\" (UID: \"6b6e0b13-d22d-412b-917d-4601a2421b6b\") " pod="openshift-console-operator/console-operator-58897d9998-2b7xd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351506 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv6mg\" (UniqueName: 
\"kubernetes.io/projected/d0c60840-a26e-42ee-9c1a-d3d27f4a5b1d-kube-api-access-nv6mg\") pod \"openshift-apiserver-operator-796bbdcf4f-46bvd\" (UID: \"d0c60840-a26e-42ee-9c1a-d3d27f4a5b1d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-46bvd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351521 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e187fc4-2932-4e70-81d7-34fe2c16dcda-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dgds8\" (UID: \"4e187fc4-2932-4e70-81d7-34fe2c16dcda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351535 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3daddc7f-d4d1-4682-97c2-b10266a3ab44-images\") pod \"machine-api-operator-5694c8668f-kh9nd\" (UID: \"3daddc7f-d4d1-4682-97c2-b10266a3ab44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kh9nd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351551 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0c60840-a26e-42ee-9c1a-d3d27f4a5b1d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-46bvd\" (UID: \"d0c60840-a26e-42ee-9c1a-d3d27f4a5b1d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-46bvd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351566 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8h7r\" (UniqueName: \"kubernetes.io/projected/8c2d6da6-c397-4077-81b1-d5b492811214-kube-api-access-d8h7r\") pod \"controller-manager-879f6c89f-jzsbs\" (UID: \"8c2d6da6-c397-4077-81b1-d5b492811214\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351581 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8c3f194-8abb-4c41-8418-169b11d6afd2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351596 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8c3f194-8abb-4c41-8418-169b11d6afd2-config\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351612 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4e187fc4-2932-4e70-81d7-34fe2c16dcda-etcd-client\") pod \"apiserver-7bbb656c7d-dgds8\" (UID: \"4e187fc4-2932-4e70-81d7-34fe2c16dcda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351626 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8c3f194-8abb-4c41-8418-169b11d6afd2-serving-cert\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 
14:23:17.351641 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxpbm\" (UniqueName: \"kubernetes.io/projected/91004505-bf59-410e-831a-62e980857994-kube-api-access-lxpbm\") pod \"openshift-config-operator-7777fb866f-jc9mp\" (UID: \"91004505-bf59-410e-831a-62e980857994\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jc9mp" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351656 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e8c3f194-8abb-4c41-8418-169b11d6afd2-node-pullsecrets\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351669 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e8c3f194-8abb-4c41-8418-169b11d6afd2-encryption-config\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351685 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351700 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351721 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2d6da6-c397-4077-81b1-d5b492811214-config\") pod \"controller-manager-879f6c89f-jzsbs\" (UID: \"8c2d6da6-c397-4077-81b1-d5b492811214\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351735 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xh2v\" (UniqueName: \"kubernetes.io/projected/76eca8a3-6f22-4275-b05a-51b795162ce3-kube-api-access-5xh2v\") pod \"machine-approver-56656f9798-n22zj\" (UID: \"76eca8a3-6f22-4275-b05a-51b795162ce3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n22zj" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351749 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-audit-policies\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351764 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/e8c3f194-8abb-4c41-8418-169b11d6afd2-audit-dir\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351778 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ef76960-0097-4477-ae8f-0f6ddb18920b-service-ca-bundle\") pod \"authentication-operator-69f744f599-6xsfq\" (UID: \"7ef76960-0097-4477-ae8f-0f6ddb18920b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xsfq" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351793 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgw7v\" (UniqueName: \"kubernetes.io/projected/5ae180b6-b7aa-413f-bf77-a9cad76c629e-kube-api-access-bgw7v\") pod \"openshift-controller-manager-operator-756b6f6bc6-bktjb\" (UID: \"5ae180b6-b7aa-413f-bf77-a9cad76c629e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bktjb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351810 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6b6e0b13-d22d-412b-917d-4601a2421b6b-trusted-ca\") pod \"console-operator-58897d9998-2b7xd\" (UID: \"6b6e0b13-d22d-412b-917d-4601a2421b6b\") " pod="openshift-console-operator/console-operator-58897d9998-2b7xd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.351825 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.352479 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.352504 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0c60840-a26e-42ee-9c1a-d3d27f4a5b1d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-46bvd\" (UID: \"d0c60840-a26e-42ee-9c1a-d3d27f4a5b1d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-46bvd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.352542 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-trusted-ca-bundle\") pod \"console-f9d7485db-j7nmb\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.352559 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/76eca8a3-6f22-4275-b05a-51b795162ce3-auth-proxy-config\") pod 
\"machine-approver-56656f9798-n22zj\" (UID: \"76eca8a3-6f22-4275-b05a-51b795162ce3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n22zj" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.352575 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91004505-bf59-410e-831a-62e980857994-serving-cert\") pod \"openshift-config-operator-7777fb866f-jc9mp\" (UID: \"91004505-bf59-410e-831a-62e980857994\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jc9mp" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.352583 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ef76960-0097-4477-ae8f-0f6ddb18920b-config\") pod \"authentication-operator-69f744f599-6xsfq\" (UID: \"7ef76960-0097-4477-ae8f-0f6ddb18920b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xsfq" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.352592 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-oauth-serving-cert\") pod \"console-f9d7485db-j7nmb\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.352592 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8c3f194-8abb-4c41-8418-169b11d6afd2-config\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.352620 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj48d\" (UniqueName: \"kubernetes.io/projected/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-kube-api-access-tj48d\") pod \"console-f9d7485db-j7nmb\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.352667 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c2d6da6-c397-4077-81b1-d5b492811214-serving-cert\") pod \"controller-manager-879f6c89f-jzsbs\" (UID: \"8c2d6da6-c397-4077-81b1-d5b492811214\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.352684 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e8c3f194-8abb-4c41-8418-169b11d6afd2-etcd-client\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.352743 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ae180b6-b7aa-413f-bf77-a9cad76c629e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bktjb\" (UID: \"5ae180b6-b7aa-413f-bf77-a9cad76c629e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bktjb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.352789 4820 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3daddc7f-d4d1-4682-97c2-b10266a3ab44-config\") pod \"machine-api-operator-5694c8668f-kh9nd\" (UID: \"3daddc7f-d4d1-4682-97c2-b10266a3ab44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kh9nd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.352814 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cpfk\" (UniqueName: \"kubernetes.io/projected/6b6e0b13-d22d-412b-917d-4601a2421b6b-kube-api-access-2cpfk\") pod \"console-operator-58897d9998-2b7xd\" (UID: \"6b6e0b13-d22d-412b-917d-4601a2421b6b\") " pod="openshift-console-operator/console-operator-58897d9998-2b7xd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.352859 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stzjv\" (UniqueName: \"kubernetes.io/projected/3449e7b8-24df-4789-959f-4ac101303cc2-kube-api-access-stzjv\") pod \"downloads-7954f5f757-m7jf2\" (UID: \"3449e7b8-24df-4789-959f-4ac101303cc2\") " pod="openshift-console/downloads-7954f5f757-m7jf2" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.352886 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pswxw\" (UniqueName: \"kubernetes.io/projected/3daddc7f-d4d1-4682-97c2-b10266a3ab44-kube-api-access-pswxw\") pod \"machine-api-operator-5694c8668f-kh9nd\" (UID: \"3daddc7f-d4d1-4682-97c2-b10266a3ab44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kh9nd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.352902 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7lzk\" (UniqueName: \"kubernetes.io/projected/7ef76960-0097-4477-ae8f-0f6ddb18920b-kube-api-access-x7lzk\") pod \"authentication-operator-69f744f599-6xsfq\" (UID: \"7ef76960-0097-4477-ae8f-0f6ddb18920b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xsfq" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.352896 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4e187fc4-2932-4e70-81d7-34fe2c16dcda-audit-policies\") pod \"apiserver-7bbb656c7d-dgds8\" (UID: \"4e187fc4-2932-4e70-81d7-34fe2c16dcda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.352921 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/91004505-bf59-410e-831a-62e980857994-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jc9mp\" (UID: \"91004505-bf59-410e-831a-62e980857994\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jc9mp" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.352994 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e8c3f194-8abb-4c41-8418-169b11d6afd2-audit\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353023 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: 
\"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353027 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rmmh\" (UniqueName: \"kubernetes.io/projected/e8c3f194-8abb-4c41-8418-169b11d6afd2-kube-api-access-2rmmh\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353179 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/76eca8a3-6f22-4275-b05a-51b795162ce3-machine-approver-tls\") pod \"machine-approver-56656f9798-n22zj\" (UID: \"76eca8a3-6f22-4275-b05a-51b795162ce3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n22zj" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353209 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e187fc4-2932-4e70-81d7-34fe2c16dcda-serving-cert\") pod \"apiserver-7bbb656c7d-dgds8\" (UID: \"4e187fc4-2932-4e70-81d7-34fe2c16dcda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353229 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4e187fc4-2932-4e70-81d7-34fe2c16dcda-audit-dir\") pod \"apiserver-7bbb656c7d-dgds8\" (UID: \"4e187fc4-2932-4e70-81d7-34fe2c16dcda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353248 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353272 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b6e0b13-d22d-412b-917d-4601a2421b6b-serving-cert\") pod \"console-operator-58897d9998-2b7xd\" (UID: \"6b6e0b13-d22d-412b-917d-4601a2421b6b\") " pod="openshift-console-operator/console-operator-58897d9998-2b7xd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353290 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3daddc7f-d4d1-4682-97c2-b10266a3ab44-images\") pod \"machine-api-operator-5694c8668f-kh9nd\" (UID: \"3daddc7f-d4d1-4682-97c2-b10266a3ab44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kh9nd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353294 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8c2d6da6-c397-4077-81b1-d5b492811214-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jzsbs\" (UID: \"8c2d6da6-c397-4077-81b1-d5b492811214\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353328 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/df0df42a-d0cc-4564-856d-a0d3ace0021f-client-ca\") pod \"route-controller-manager-6576b87f9c-95545\" (UID: \"df0df42a-d0cc-4564-856d-a0d3ace0021f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353344 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-console-serving-cert\") pod \"console-f9d7485db-j7nmb\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353360 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ae180b6-b7aa-413f-bf77-a9cad76c629e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bktjb\" (UID: \"5ae180b6-b7aa-413f-bf77-a9cad76c629e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bktjb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353376 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-console-config\") pod \"console-f9d7485db-j7nmb\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353362 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0c60840-a26e-42ee-9c1a-d3d27f4a5b1d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-46bvd\" (UID: \"d0c60840-a26e-42ee-9c1a-d3d27f4a5b1d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-46bvd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353393 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ef76960-0097-4477-ae8f-0f6ddb18920b-serving-cert\") pod \"authentication-operator-69f744f599-6xsfq\" (UID: \"7ef76960-0097-4477-ae8f-0f6ddb18920b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xsfq" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353404 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/91004505-bf59-410e-831a-62e980857994-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jc9mp\" (UID: \"91004505-bf59-410e-831a-62e980857994\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jc9mp" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353432 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glhh8\" (UniqueName: \"kubernetes.io/projected/4e187fc4-2932-4e70-81d7-34fe2c16dcda-kube-api-access-glhh8\") pod \"apiserver-7bbb656c7d-dgds8\" (UID: \"4e187fc4-2932-4e70-81d7-34fe2c16dcda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353465 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0df42a-d0cc-4564-856d-a0d3ace0021f-config\") pod \"route-controller-manager-6576b87f9c-95545\" (UID: 
\"df0df42a-d0cc-4564-856d-a0d3ace0021f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353485 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76eca8a3-6f22-4275-b05a-51b795162ce3-config\") pod \"machine-approver-56656f9798-n22zj\" (UID: \"76eca8a3-6f22-4275-b05a-51b795162ce3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n22zj" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353501 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4e187fc4-2932-4e70-81d7-34fe2c16dcda-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dgds8\" (UID: \"4e187fc4-2932-4e70-81d7-34fe2c16dcda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353519 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353539 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e8c3f194-8abb-4c41-8418-169b11d6afd2-image-import-ca\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353555 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4e187fc4-2932-4e70-81d7-34fe2c16dcda-encryption-config\") pod \"apiserver-7bbb656c7d-dgds8\" (UID: \"4e187fc4-2932-4e70-81d7-34fe2c16dcda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353570 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3daddc7f-d4d1-4682-97c2-b10266a3ab44-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-kh9nd\" (UID: \"3daddc7f-d4d1-4682-97c2-b10266a3ab44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kh9nd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353599 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2gvl\" (UniqueName: \"kubernetes.io/projected/df0df42a-d0cc-4564-856d-a0d3ace0021f-kube-api-access-j2gvl\") pod \"route-controller-manager-6576b87f9c-95545\" (UID: \"df0df42a-d0cc-4564-856d-a0d3ace0021f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353615 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ef76960-0097-4477-ae8f-0f6ddb18920b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6xsfq\" (UID: \"7ef76960-0097-4477-ae8f-0f6ddb18920b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xsfq" Feb 01 14:23:17 crc kubenswrapper[4820]: 
I0201 14:23:17.353632 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c2d6da6-c397-4077-81b1-d5b492811214-client-ca\") pod \"controller-manager-879f6c89f-jzsbs\" (UID: \"8c2d6da6-c397-4077-81b1-d5b492811214\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.353647 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-console-oauth-config\") pod \"console-f9d7485db-j7nmb\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.354115 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4e187fc4-2932-4e70-81d7-34fe2c16dcda-audit-dir\") pod \"apiserver-7bbb656c7d-dgds8\" (UID: \"4e187fc4-2932-4e70-81d7-34fe2c16dcda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.354306 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-audit-dir\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.352369 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-service-ca\") pod \"console-f9d7485db-j7nmb\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.354482 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8c2d6da6-c397-4077-81b1-d5b492811214-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jzsbs\" (UID: \"8c2d6da6-c397-4077-81b1-d5b492811214\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.354598 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.355162 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b6e0b13-d22d-412b-917d-4601a2421b6b-config\") pod \"console-operator-58897d9998-2b7xd\" (UID: \"6b6e0b13-d22d-412b-917d-4601a2421b6b\") " pod="openshift-console-operator/console-operator-58897d9998-2b7xd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.352040 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e8c3f194-8abb-4c41-8418-169b11d6afd2-etcd-serving-ca\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 
crc kubenswrapper[4820]: I0201 14:23:17.355613 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e187fc4-2932-4e70-81d7-34fe2c16dcda-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dgds8\" (UID: \"4e187fc4-2932-4e70-81d7-34fe2c16dcda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.355759 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df0df42a-d0cc-4564-856d-a0d3ace0021f-client-ca\") pod \"route-controller-manager-6576b87f9c-95545\" (UID: \"df0df42a-d0cc-4564-856d-a0d3ace0021f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.356220 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-trusted-ca-bundle\") pod \"console-f9d7485db-j7nmb\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.356261 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8c3f194-8abb-4c41-8418-169b11d6afd2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.357125 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/76eca8a3-6f22-4275-b05a-51b795162ce3-auth-proxy-config\") pod \"machine-approver-56656f9798-n22zj\" (UID: \"76eca8a3-6f22-4275-b05a-51b795162ce3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n22zj" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.357282 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ae180b6-b7aa-413f-bf77-a9cad76c629e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bktjb\" (UID: \"5ae180b6-b7aa-413f-bf77-a9cad76c629e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bktjb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.357694 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76eca8a3-6f22-4275-b05a-51b795162ce3-config\") pod \"machine-approver-56656f9798-n22zj\" (UID: \"76eca8a3-6f22-4275-b05a-51b795162ce3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n22zj" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.357836 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e8c3f194-8abb-4c41-8418-169b11d6afd2-audit-dir\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.359193 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e8c3f194-8abb-4c41-8418-169b11d6afd2-node-pullsecrets\") pod \"apiserver-76f77b778f-gwgjr\" (UID: 
\"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.359373 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ef76960-0097-4477-ae8f-0f6ddb18920b-service-ca-bundle\") pod \"authentication-operator-69f744f599-6xsfq\" (UID: \"7ef76960-0097-4477-ae8f-0f6ddb18920b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xsfq" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.359742 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-oauth-serving-cert\") pod \"console-f9d7485db-j7nmb\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.360031 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/76eca8a3-6f22-4275-b05a-51b795162ce3-machine-approver-tls\") pod \"machine-approver-56656f9798-n22zj\" (UID: \"76eca8a3-6f22-4275-b05a-51b795162ce3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n22zj" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.360642 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2d6da6-c397-4077-81b1-d5b492811214-config\") pod \"controller-manager-879f6c89f-jzsbs\" (UID: \"8c2d6da6-c397-4077-81b1-d5b492811214\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.360946 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-audit-policies\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.361005 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ef76960-0097-4477-ae8f-0f6ddb18920b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6xsfq\" (UID: \"7ef76960-0097-4477-ae8f-0f6ddb18920b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xsfq" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.361086 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-console-config\") pod \"console-f9d7485db-j7nmb\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.361322 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6b6e0b13-d22d-412b-917d-4601a2421b6b-trusted-ca\") pod \"console-operator-58897d9998-2b7xd\" (UID: \"6b6e0b13-d22d-412b-917d-4601a2421b6b\") " pod="openshift-console-operator/console-operator-58897d9998-2b7xd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.361338 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e8c3f194-8abb-4c41-8418-169b11d6afd2-serving-cert\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.361726 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4e187fc4-2932-4e70-81d7-34fe2c16dcda-etcd-client\") pod \"apiserver-7bbb656c7d-dgds8\" (UID: \"4e187fc4-2932-4e70-81d7-34fe2c16dcda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.362025 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91004505-bf59-410e-831a-62e980857994-serving-cert\") pod \"openshift-config-operator-7777fb866f-jc9mp\" (UID: \"91004505-bf59-410e-831a-62e980857994\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jc9mp" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.362177 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df0df42a-d0cc-4564-856d-a0d3ace0021f-serving-cert\") pod \"route-controller-manager-6576b87f9c-95545\" (UID: \"df0df42a-d0cc-4564-856d-a0d3ace0021f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.362427 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.362981 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.363034 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-console-oauth-config\") pod \"console-f9d7485db-j7nmb\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.363281 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c2d6da6-c397-4077-81b1-d5b492811214-client-ca\") pod \"controller-manager-879f6c89f-jzsbs\" (UID: \"8c2d6da6-c397-4077-81b1-d5b492811214\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.363422 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.363701 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e8c3f194-8abb-4c41-8418-169b11d6afd2-audit\") pod \"apiserver-76f77b778f-gwgjr\" (UID: 
\"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.363748 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-console-serving-cert\") pod \"console-f9d7485db-j7nmb\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.363815 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.364252 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3daddc7f-d4d1-4682-97c2-b10266a3ab44-config\") pod \"machine-api-operator-5694c8668f-kh9nd\" (UID: \"3daddc7f-d4d1-4682-97c2-b10266a3ab44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kh9nd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.364617 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.364696 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.364799 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e8c3f194-8abb-4c41-8418-169b11d6afd2-encryption-config\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.364802 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e8c3f194-8abb-4c41-8418-169b11d6afd2-etcd-client\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.365185 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.365368 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3daddc7f-d4d1-4682-97c2-b10266a3ab44-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-kh9nd\" (UID: \"3daddc7f-d4d1-4682-97c2-b10266a3ab44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kh9nd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.365385 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c2d6da6-c397-4077-81b1-d5b492811214-serving-cert\") pod \"controller-manager-879f6c89f-jzsbs\" (UID: \"8c2d6da6-c397-4077-81b1-d5b492811214\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.365411 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4e187fc4-2932-4e70-81d7-34fe2c16dcda-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dgds8\" (UID: \"4e187fc4-2932-4e70-81d7-34fe2c16dcda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.358477 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.365981 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4e187fc4-2932-4e70-81d7-34fe2c16dcda-encryption-config\") pod \"apiserver-7bbb656c7d-dgds8\" (UID: \"4e187fc4-2932-4e70-81d7-34fe2c16dcda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.366034 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.366070 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ef76960-0097-4477-ae8f-0f6ddb18920b-serving-cert\") pod \"authentication-operator-69f744f599-6xsfq\" (UID: \"7ef76960-0097-4477-ae8f-0f6ddb18920b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xsfq" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.366237 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b6e0b13-d22d-412b-917d-4601a2421b6b-serving-cert\") pod \"console-operator-58897d9998-2b7xd\" (UID: \"6b6e0b13-d22d-412b-917d-4601a2421b6b\") " pod="openshift-console-operator/console-operator-58897d9998-2b7xd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.366427 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e187fc4-2932-4e70-81d7-34fe2c16dcda-serving-cert\") pod \"apiserver-7bbb656c7d-dgds8\" (UID: \"4e187fc4-2932-4e70-81d7-34fe2c16dcda\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.367299 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0c60840-a26e-42ee-9c1a-d3d27f4a5b1d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-46bvd\" (UID: \"d0c60840-a26e-42ee-9c1a-d3d27f4a5b1d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-46bvd" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.367399 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ae180b6-b7aa-413f-bf77-a9cad76c629e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bktjb\" (UID: \"5ae180b6-b7aa-413f-bf77-a9cad76c629e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bktjb" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.367412 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0df42a-d0cc-4564-856d-a0d3ace0021f-config\") pod \"route-controller-manager-6576b87f9c-95545\" (UID: \"df0df42a-d0cc-4564-856d-a0d3ace0021f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.367721 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.380252 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.386191 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e8c3f194-8abb-4c41-8418-169b11d6afd2-image-import-ca\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.401468 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.420295 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.440358 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.460212 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.480487 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.500665 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.520393 4820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-metrics-certs-default" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.540376 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.560258 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.579473 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.600162 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.620035 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.640450 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.659787 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.681055 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.701013 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.720386 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.740704 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.760167 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.780985 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.800030 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.819739 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.840765 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.860377 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.880669 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 01 
14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.901235 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.920674 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.940758 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.967933 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.979644 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 01 14:23:17 crc kubenswrapper[4820]: I0201 14:23:17.999837 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.040543 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.060468 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.080887 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.103760 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.121926 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.140998 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.160682 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.181058 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.200925 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.219840 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.241228 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.259075 4820 request.go:700] Waited for 1.003207458s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&limit=500&resourceVersion=0 Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.261414 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.280252 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.340362 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.341306 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.341432 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.360913 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.381724 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.400654 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.420089 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.440244 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.468137 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.480392 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.501332 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.520088 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.541704 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.560645 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.580628 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.604930 4820 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.622853 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.640975 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.660192 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.680897 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.699629 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.720835 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.740638 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.759857 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.780809 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.800770 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.820729 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.840662 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.860641 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.881083 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.902014 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.919891 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.941064 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.961892 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 01 14:23:18 crc kubenswrapper[4820]: I0201 14:23:18.980549 4820 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.000529 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.020285 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.040614 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.060638 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.080446 4820 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.100437 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.161915 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzrhk\" (UniqueName: \"kubernetes.io/projected/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-kube-api-access-fzrhk\") pod \"oauth-openshift-558db77b4-f4mwh\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.174677 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rmmh\" (UniqueName: \"kubernetes.io/projected/e8c3f194-8abb-4c41-8418-169b11d6afd2-kube-api-access-2rmmh\") pod \"apiserver-76f77b778f-gwgjr\" (UID: \"e8c3f194-8abb-4c41-8418-169b11d6afd2\") " pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.196015 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8h7r\" (UniqueName: \"kubernetes.io/projected/8c2d6da6-c397-4077-81b1-d5b492811214-kube-api-access-d8h7r\") pod \"controller-manager-879f6c89f-jzsbs\" (UID: \"8c2d6da6-c397-4077-81b1-d5b492811214\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.227743 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv6mg\" (UniqueName: \"kubernetes.io/projected/d0c60840-a26e-42ee-9c1a-d3d27f4a5b1d-kube-api-access-nv6mg\") pod \"openshift-apiserver-operator-796bbdcf4f-46bvd\" (UID: \"d0c60840-a26e-42ee-9c1a-d3d27f4a5b1d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-46bvd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.245622 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glhh8\" (UniqueName: \"kubernetes.io/projected/4e187fc4-2932-4e70-81d7-34fe2c16dcda-kube-api-access-glhh8\") pod \"apiserver-7bbb656c7d-dgds8\" (UID: \"4e187fc4-2932-4e70-81d7-34fe2c16dcda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.253839 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.255721 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgw7v\" (UniqueName: \"kubernetes.io/projected/5ae180b6-b7aa-413f-bf77-a9cad76c629e-kube-api-access-bgw7v\") pod \"openshift-controller-manager-operator-756b6f6bc6-bktjb\" (UID: \"5ae180b6-b7aa-413f-bf77-a9cad76c629e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bktjb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.273020 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.275838 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxpbm\" (UniqueName: \"kubernetes.io/projected/91004505-bf59-410e-831a-62e980857994-kube-api-access-lxpbm\") pod \"openshift-config-operator-7777fb866f-jc9mp\" (UID: \"91004505-bf59-410e-831a-62e980857994\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jc9mp" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.279223 4820 request.go:700] Waited for 1.919569995s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.298471 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2gvl\" (UniqueName: \"kubernetes.io/projected/df0df42a-d0cc-4564-856d-a0d3ace0021f-kube-api-access-j2gvl\") pod \"route-controller-manager-6576b87f9c-95545\" (UID: \"df0df42a-d0cc-4564-856d-a0d3ace0021f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.301327 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.308438 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.315645 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xh2v\" (UniqueName: \"kubernetes.io/projected/76eca8a3-6f22-4275-b05a-51b795162ce3-kube-api-access-5xh2v\") pod \"machine-approver-56656f9798-n22zj\" (UID: \"76eca8a3-6f22-4275-b05a-51b795162ce3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n22zj" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.318421 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jc9mp" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.348139 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj48d\" (UniqueName: \"kubernetes.io/projected/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-kube-api-access-tj48d\") pod \"console-f9d7485db-j7nmb\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.351032 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bktjb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.356468 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cpfk\" (UniqueName: \"kubernetes.io/projected/6b6e0b13-d22d-412b-917d-4601a2421b6b-kube-api-access-2cpfk\") pod \"console-operator-58897d9998-2b7xd\" (UID: \"6b6e0b13-d22d-412b-917d-4601a2421b6b\") " pod="openshift-console-operator/console-operator-58897d9998-2b7xd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.377008 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.393976 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stzjv\" (UniqueName: \"kubernetes.io/projected/3449e7b8-24df-4789-959f-4ac101303cc2-kube-api-access-stzjv\") pod \"downloads-7954f5f757-m7jf2\" (UID: \"3449e7b8-24df-4789-959f-4ac101303cc2\") " pod="openshift-console/downloads-7954f5f757-m7jf2" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.398634 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pswxw\" (UniqueName: \"kubernetes.io/projected/3daddc7f-d4d1-4682-97c2-b10266a3ab44-kube-api-access-pswxw\") pod \"machine-api-operator-5694c8668f-kh9nd\" (UID: \"3daddc7f-d4d1-4682-97c2-b10266a3ab44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kh9nd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.432239 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7lzk\" (UniqueName: \"kubernetes.io/projected/7ef76960-0097-4477-ae8f-0f6ddb18920b-kube-api-access-x7lzk\") pod \"authentication-operator-69f744f599-6xsfq\" (UID: \"7ef76960-0097-4477-ae8f-0f6ddb18920b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xsfq" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.478612 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhppj\" (UniqueName: \"kubernetes.io/projected/db64eb7b-f370-4277-8983-f7c5e796466c-kube-api-access-zhppj\") pod \"etcd-operator-b45778765-6ttx5\" (UID: \"db64eb7b-f370-4277-8983-f7c5e796466c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ttx5" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.478653 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e180d67d-fdb1-4874-a793-abe25452fe6d-registry-certificates\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.478707 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e57d67e-a8ab-4574-91e4-958191d83ad5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-g52lb\" (UID: \"1e57d67e-a8ab-4574-91e4-958191d83ad5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g52lb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.478781 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/29786e24-0b8b-48d2-9759-74b6e6011d53-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rgsmt\" (UID: \"29786e24-0b8b-48d2-9759-74b6e6011d53\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgsmt" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.478808 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/040a4a57-4d8b-4209-8330-a0e2f3195384-default-certificate\") pod \"router-default-5444994796-5rtqk\" (UID: \"040a4a57-4d8b-4209-8330-a0e2f3195384\") " pod="openshift-ingress/router-default-5444994796-5rtqk" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.478858 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7354f154-8dbb-407c-967c-02986c478d6c-config\") pod \"kube-controller-manager-operator-78b949d7b-2szbj\" (UID: \"7354f154-8dbb-407c-967c-02986c478d6c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2szbj" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.478999 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/040a4a57-4d8b-4209-8330-a0e2f3195384-service-ca-bundle\") pod \"router-default-5444994796-5rtqk\" (UID: \"040a4a57-4d8b-4209-8330-a0e2f3195384\") " pod="openshift-ingress/router-default-5444994796-5rtqk" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.479098 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpq6d\" (UniqueName: \"kubernetes.io/projected/03bbc152-7ab5-4be7-b3a7-c8f84b8acd14-kube-api-access-wpq6d\") pod \"kube-storage-version-migrator-operator-b67b599dd-sqtv9\" (UID: \"03bbc152-7ab5-4be7-b3a7-c8f84b8acd14\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sqtv9" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.479174 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljgjs\" (UniqueName: \"kubernetes.io/projected/6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b-kube-api-access-ljgjs\") pod \"ingress-operator-5b745b69d9-lznd8\" (UID: \"6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lznd8" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.479197 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ac3ec5b-0194-404e-876a-7431e3bd1c6e-metrics-tls\") pod \"dns-operator-744455d44c-t85m5\" (UID: \"9ac3ec5b-0194-404e-876a-7431e3bd1c6e\") " pod="openshift-dns-operator/dns-operator-744455d44c-t85m5" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.479255 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/198caafc-507d-4c27-87bb-3d99b541e58c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-76qdl\" (UID: \"198caafc-507d-4c27-87bb-3d99b541e58c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-76qdl" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.479299 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/198caafc-507d-4c27-87bb-3d99b541e58c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-76qdl\" (UID: \"198caafc-507d-4c27-87bb-3d99b541e58c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-76qdl" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.479329 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db64eb7b-f370-4277-8983-f7c5e796466c-serving-cert\") pod \"etcd-operator-b45778765-6ttx5\" (UID: \"db64eb7b-f370-4277-8983-f7c5e796466c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ttx5" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.479369 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/040a4a57-4d8b-4209-8330-a0e2f3195384-stats-auth\") pod \"router-default-5444994796-5rtqk\" (UID: \"040a4a57-4d8b-4209-8330-a0e2f3195384\") " pod="openshift-ingress/router-default-5444994796-5rtqk" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.479392 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03bbc152-7ab5-4be7-b3a7-c8f84b8acd14-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sqtv9\" (UID: \"03bbc152-7ab5-4be7-b3a7-c8f84b8acd14\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sqtv9" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.479440 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e180d67d-fdb1-4874-a793-abe25452fe6d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.479511 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lznd8\" (UID: \"6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lznd8" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.479550 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hn4l\" (UniqueName: \"kubernetes.io/projected/29786e24-0b8b-48d2-9759-74b6e6011d53-kube-api-access-4hn4l\") pod \"cluster-samples-operator-665b6dd947-rgsmt\" (UID: \"29786e24-0b8b-48d2-9759-74b6e6011d53\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgsmt" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.479818 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03bbc152-7ab5-4be7-b3a7-c8f84b8acd14-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sqtv9\" (UID: \"03bbc152-7ab5-4be7-b3a7-c8f84b8acd14\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sqtv9" Feb 01 14:23:19 crc 
kubenswrapper[4820]: I0201 14:23:19.479857 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e180d67d-fdb1-4874-a793-abe25452fe6d-bound-sa-token\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.479913 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b-trusted-ca\") pod \"ingress-operator-5b745b69d9-lznd8\" (UID: \"6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lznd8" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.479952 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db64eb7b-f370-4277-8983-f7c5e796466c-config\") pod \"etcd-operator-b45778765-6ttx5\" (UID: \"db64eb7b-f370-4277-8983-f7c5e796466c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ttx5" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.479967 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e180d67d-fdb1-4874-a793-abe25452fe6d-registry-tls\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.479984 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/db64eb7b-f370-4277-8983-f7c5e796466c-etcd-client\") pod \"etcd-operator-b45778765-6ttx5\" (UID: \"db64eb7b-f370-4277-8983-f7c5e796466c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ttx5" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.480015 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7354f154-8dbb-407c-967c-02986c478d6c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2szbj\" (UID: \"7354f154-8dbb-407c-967c-02986c478d6c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2szbj" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.480047 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.480095 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e57d67e-a8ab-4574-91e4-958191d83ad5-config\") pod \"kube-apiserver-operator-766d6c64bb-g52lb\" (UID: \"1e57d67e-a8ab-4574-91e4-958191d83ad5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g52lb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.480131 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/db64eb7b-f370-4277-8983-f7c5e796466c-etcd-ca\") pod \"etcd-operator-b45778765-6ttx5\" (UID: \"db64eb7b-f370-4277-8983-f7c5e796466c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ttx5" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.480166 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz24t\" (UniqueName: \"kubernetes.io/projected/e180d67d-fdb1-4874-a793-abe25452fe6d-kube-api-access-xz24t\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.480201 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv4v8\" (UniqueName: \"kubernetes.io/projected/9ac3ec5b-0194-404e-876a-7431e3bd1c6e-kube-api-access-dv4v8\") pod \"dns-operator-744455d44c-t85m5\" (UID: \"9ac3ec5b-0194-404e-876a-7431e3bd1c6e\") " pod="openshift-dns-operator/dns-operator-744455d44c-t85m5" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.480222 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl66r\" (UniqueName: \"kubernetes.io/projected/040a4a57-4d8b-4209-8330-a0e2f3195384-kube-api-access-cl66r\") pod \"router-default-5444994796-5rtqk\" (UID: \"040a4a57-4d8b-4209-8330-a0e2f3195384\") " pod="openshift-ingress/router-default-5444994796-5rtqk" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.480248 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/040a4a57-4d8b-4209-8330-a0e2f3195384-metrics-certs\") pod \"router-default-5444994796-5rtqk\" (UID: \"040a4a57-4d8b-4209-8330-a0e2f3195384\") " pod="openshift-ingress/router-default-5444994796-5rtqk" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.480281 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b-metrics-tls\") pod \"ingress-operator-5b745b69d9-lznd8\" (UID: \"6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lznd8" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.480335 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/198caafc-507d-4c27-87bb-3d99b541e58c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-76qdl\" (UID: \"198caafc-507d-4c27-87bb-3d99b541e58c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-76qdl" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.480368 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/db64eb7b-f370-4277-8983-f7c5e796466c-etcd-service-ca\") pod \"etcd-operator-b45778765-6ttx5\" (UID: \"db64eb7b-f370-4277-8983-f7c5e796466c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ttx5" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.480419 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e57d67e-a8ab-4574-91e4-958191d83ad5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-g52lb\" (UID: \"1e57d67e-a8ab-4574-91e4-958191d83ad5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g52lb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.480456 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7354f154-8dbb-407c-967c-02986c478d6c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2szbj\" (UID: \"7354f154-8dbb-407c-967c-02986c478d6c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2szbj" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.480505 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e180d67d-fdb1-4874-a793-abe25452fe6d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.480523 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm4gr\" (UniqueName: \"kubernetes.io/projected/198caafc-507d-4c27-87bb-3d99b541e58c-kube-api-access-dm4gr\") pod \"cluster-image-registry-operator-dc59b4c8b-76qdl\" (UID: \"198caafc-507d-4c27-87bb-3d99b541e58c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-76qdl" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.480570 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e180d67d-fdb1-4874-a793-abe25452fe6d-trusted-ca\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:19 crc kubenswrapper[4820]: E0201 14:23:19.483215 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:19.983200829 +0000 UTC m=+141.503567113 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.493909 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-kh9nd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.511446 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-46bvd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.516691 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8"] Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.526117 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.556651 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-6xsfq" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.581301 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.581468 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e57d67e-a8ab-4574-91e4-958191d83ad5-config\") pod \"kube-apiserver-operator-766d6c64bb-g52lb\" (UID: \"1e57d67e-a8ab-4574-91e4-958191d83ad5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g52lb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.581491 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/47bf7210-2d6d-484a-bac5-3847ea568287-csi-data-dir\") pod \"csi-hostpathplugin-k57sd\" (UID: \"47bf7210-2d6d-484a-bac5-3847ea568287\") " pod="hostpath-provisioner/csi-hostpathplugin-k57sd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.581512 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rdkt\" (UniqueName: \"kubernetes.io/projected/0ea50054-9a92-447e-aa45-115789f1cad3-kube-api-access-2rdkt\") pod \"migrator-59844c95c7-ssfjj\" (UID: \"0ea50054-9a92-447e-aa45-115789f1cad3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssfjj" Feb 01 14:23:19 crc kubenswrapper[4820]: E0201 14:23:19.581529 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:20.081508127 +0000 UTC m=+141.601874411 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.581584 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62a55b26-3e3b-4eb0-bc0d-546c594f9384-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5gzxg\" (UID: \"62a55b26-3e3b-4eb0-bc0d-546c594f9384\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5gzxg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.581613 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/db64eb7b-f370-4277-8983-f7c5e796466c-etcd-ca\") pod \"etcd-operator-b45778765-6ttx5\" (UID: \"db64eb7b-f370-4277-8983-f7c5e796466c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ttx5" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.581635 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn8zz\" (UniqueName: \"kubernetes.io/projected/7ff1096c-6211-4a8c-983d-a03517127437-kube-api-access-qn8zz\") pod \"service-ca-operator-777779d784-nlsnf\" (UID: \"7ff1096c-6211-4a8c-983d-a03517127437\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nlsnf" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.581651 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckc9b\" (UniqueName: \"kubernetes.io/projected/bc9dfe32-8826-439f-b09b-a4f1fb00cc5b-kube-api-access-ckc9b\") pod \"machine-config-server-njr47\" (UID: \"bc9dfe32-8826-439f-b09b-a4f1fb00cc5b\") " pod="openshift-machine-config-operator/machine-config-server-njr47" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582033 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f2kg\" (UniqueName: \"kubernetes.io/projected/760d30a8-d961-45a0-9b33-142f45625c41-kube-api-access-5f2kg\") pod \"machine-config-controller-84d6567774-rfpxv\" (UID: \"760d30a8-d961-45a0-9b33-142f45625c41\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rfpxv" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582107 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/48b5b565-27da-43ed-9258-f836c7293930-auth-proxy-config\") pod \"machine-config-operator-74547568cd-g7b4m\" (UID: \"48b5b565-27da-43ed-9258-f836c7293930\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g7b4m" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582131 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz24t\" (UniqueName: \"kubernetes.io/projected/e180d67d-fdb1-4874-a793-abe25452fe6d-kube-api-access-xz24t\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" 
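The reconciler entries above follow a fixed per-volume progression: reconciler_common.go:245 logs "VerifyControllerAttachedVolume started" once the volume is expected to be attached to the node, reconciler_common.go:218 logs "MountVolume started", and operation_generator.go:637 logs "MountVolume.SetUp succeeded" when the mount completes; the loop revisits every volume on each pass rather than driving any single one to completion. A minimal Go sketch of that state progression, with hypothetical types (this is illustrative, not kubelet source):

```go
package main

import "fmt"

// volumeState mirrors the three log messages seen above.
type volumeState int

const (
	attachedToNode volumeState = iota // VerifyControllerAttachedVolume passed
	mountStarted                      // operationExecutor.MountVolume started
	setUpSucceeded                    // MountVolume.SetUp succeeded
)

type podVolume struct {
	name, pod string
	state     volumeState
}

// step advances one state per reconciler pass; the real loop likewise
// re-examines desired vs. actual state each pass instead of blocking.
func (v *podVolume) step() {
	switch v.state {
	case attachedToNode:
		fmt.Printf("MountVolume started for volume %q pod %q\n", v.name, v.pod)
		v.state = mountStarted
	case mountStarted:
		fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", v.name, v.pod)
		v.state = setUpSucceeded
	}
}

func main() {
	v := &podVolume{name: "trusted-ca", pod: "image-registry-697d97f7c8-fzchg", state: attachedToNode}
	v.step()
	v.step()
}
```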
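Both E-level failures above (MountVolume.MountDevice and UnmountVolume.TearDown for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8) share one root cause: "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers". Kubelet resolves CSI calls through an in-memory registry populated when a driver's node plugin registers over the kubelet plugin-registration socket, and that plugin here appears to be the csi-hostpathplugin-k57sd pod whose own volumes (csi-data-dir, and later registration-dir, socket-dir, plugins-dir, mountpoint-dir) are being wired up in these same entries, so this reads as a startup ordering race that clears once the plugin registers. A sketch of the lookup shape that yields this error string, assuming a simple guarded name-to-endpoint map (the socket path below is made up for the sketch, not taken from this log):

```go
package main

import (
	"fmt"
	"sync"
)

// csiRegistry assumes the simplest shape for the "list of registered
// CSI drivers": a guarded name -> endpoint map. Illustrative only.
type csiRegistry struct {
	mu        sync.RWMutex
	endpoints map[string]string
}

// newClient fails fast when the driver has not registered yet, which is
// the condition surfaced in the errors above.
func (r *csiRegistry) newClient(driver string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	ep, ok := r.endpoints[driver]
	if !ok {
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driver)
	}
	return ep, nil
}

// register stands in for the plugin-registration handshake completing
// once the node plugin (csi-hostpathplugin-k57sd here) comes up.
func (r *csiRegistry) register(driver, endpoint string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.endpoints[driver] = endpoint
}

func main() {
	reg := &csiRegistry{endpoints: map[string]string{}}
	if _, err := reg.newClient("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("MountVolume.MountDevice failed:", err) // the state seen in this log
	}
	reg.register("kubevirt.io.hostpath-provisioner", "/var/lib/kubelet/plugins/csi-hostpath/csi.sock")
	if ep, err := reg.newClient("kubevirt.io.hostpath-provisioner"); err == nil {
		fmt.Println("driver registered at", ep) // retries succeed from here on
	}
}
```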
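The nestedpendingoperations.go:348 lines are the retry gate for those failures: each failed volume operation is stamped "No retries permitted until" a deadline 500ms in the future, and the reconciler skips the operation until that deadline passes. A hedged sketch of that gating; the 500ms initial wait comes straight from the log, while the doubling and the cap are assumptions for illustration, not verified kubelet constants:

```go
package main

import (
	"fmt"
	"time"
)

// pendingOp sketches the backoff state behind the errors above.
type pendingOp struct {
	lastErrorTime       time.Time
	durationBeforeRetry time.Duration
}

// fail records a failure and widens the retry window.
func (o *pendingOp) fail(now time.Time) {
	switch {
	case o.durationBeforeRetry == 0:
		o.durationBeforeRetry = 500 * time.Millisecond // matches "durationBeforeRetry 500ms"
	case o.durationBeforeRetry < 2*time.Minute: // assumed cap
		o.durationBeforeRetry *= 2 // assumed growth factor
	}
	o.lastErrorTime = now
}

// retryPermitted mirrors "No retries permitted until <lastError+wait>".
func (o *pendingOp) retryPermitted(now time.Time) bool {
	return now.After(o.lastErrorTime.Add(o.durationBeforeRetry))
}

func main() {
	op := &pendingOp{}
	now := time.Now()
	op.fail(now)
	fmt.Println("no retries permitted until", op.lastErrorTime.Add(op.durationBeforeRetry))
	fmt.Println("permitted right away?", op.retryPermitted(now)) // false for 500ms
}
```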
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582141 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e57d67e-a8ab-4574-91e4-958191d83ad5-config\") pod \"kube-apiserver-operator-766d6c64bb-g52lb\" (UID: \"1e57d67e-a8ab-4574-91e4-958191d83ad5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g52lb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582162 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv4v8\" (UniqueName: \"kubernetes.io/projected/9ac3ec5b-0194-404e-876a-7431e3bd1c6e-kube-api-access-dv4v8\") pod \"dns-operator-744455d44c-t85m5\" (UID: \"9ac3ec5b-0194-404e-876a-7431e3bd1c6e\") " pod="openshift-dns-operator/dns-operator-744455d44c-t85m5" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582181 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl66r\" (UniqueName: \"kubernetes.io/projected/040a4a57-4d8b-4209-8330-a0e2f3195384-kube-api-access-cl66r\") pod \"router-default-5444994796-5rtqk\" (UID: \"040a4a57-4d8b-4209-8330-a0e2f3195384\") " pod="openshift-ingress/router-default-5444994796-5rtqk" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582197 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/48b5b565-27da-43ed-9258-f836c7293930-proxy-tls\") pod \"machine-config-operator-74547568cd-g7b4m\" (UID: \"48b5b565-27da-43ed-9258-f836c7293930\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g7b4m" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582212 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/23cb7dcf-9cf8-4d10-86dd-496077d38670-apiservice-cert\") pod \"packageserver-d55dfcdfc-gtlmh\" (UID: \"23cb7dcf-9cf8-4d10-86dd-496077d38670\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gtlmh" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582238 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/040a4a57-4d8b-4209-8330-a0e2f3195384-metrics-certs\") pod \"router-default-5444994796-5rtqk\" (UID: \"040a4a57-4d8b-4209-8330-a0e2f3195384\") " pod="openshift-ingress/router-default-5444994796-5rtqk" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582258 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b-metrics-tls\") pod \"ingress-operator-5b745b69d9-lznd8\" (UID: \"6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lznd8" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582273 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-426hv\" (UniqueName: \"kubernetes.io/projected/0c41b98a-0076-4305-8540-4365c212bfd2-kube-api-access-426hv\") pod \"catalog-operator-68c6474976-xn6hf\" (UID: \"0c41b98a-0076-4305-8540-4365c212bfd2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xn6hf" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582298 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/7ff1096c-6211-4a8c-983d-a03517127437-config\") pod \"service-ca-operator-777779d784-nlsnf\" (UID: \"7ff1096c-6211-4a8c-983d-a03517127437\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nlsnf" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582312 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jclwz\" (UniqueName: \"kubernetes.io/projected/e5053e28-cfcd-4e0b-aadc-18ed97ea8c3d-kube-api-access-jclwz\") pod \"multus-admission-controller-857f4d67dd-zckvh\" (UID: \"e5053e28-cfcd-4e0b-aadc-18ed97ea8c3d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zckvh" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582332 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/198caafc-507d-4c27-87bb-3d99b541e58c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-76qdl\" (UID: \"198caafc-507d-4c27-87bb-3d99b541e58c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-76qdl" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582348 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/db64eb7b-f370-4277-8983-f7c5e796466c-etcd-service-ca\") pod \"etcd-operator-b45778765-6ttx5\" (UID: \"db64eb7b-f370-4277-8983-f7c5e796466c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ttx5" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582367 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/760d30a8-d961-45a0-9b33-142f45625c41-proxy-tls\") pod \"machine-config-controller-84d6567774-rfpxv\" (UID: \"760d30a8-d961-45a0-9b33-142f45625c41\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rfpxv" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582395 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e57d67e-a8ab-4574-91e4-958191d83ad5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-g52lb\" (UID: \"1e57d67e-a8ab-4574-91e4-958191d83ad5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g52lb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582411 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/386886d8-a869-46f4-b9b2-b5e142721ce2-signing-cabundle\") pod \"service-ca-9c57cc56f-w4mdd\" (UID: \"386886d8-a869-46f4-b9b2-b5e142721ce2\") " pod="openshift-service-ca/service-ca-9c57cc56f-w4mdd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582427 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p45pt\" (UniqueName: \"kubernetes.io/projected/386886d8-a869-46f4-b9b2-b5e142721ce2-kube-api-access-p45pt\") pod \"service-ca-9c57cc56f-w4mdd\" (UID: \"386886d8-a869-46f4-b9b2-b5e142721ce2\") " pod="openshift-service-ca/service-ca-9c57cc56f-w4mdd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582452 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7354f154-8dbb-407c-967c-02986c478d6c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2szbj\" (UID: \"7354f154-8dbb-407c-967c-02986c478d6c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2szbj" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582471 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e180d67d-fdb1-4874-a793-abe25452fe6d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582486 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm4gr\" (UniqueName: \"kubernetes.io/projected/198caafc-507d-4c27-87bb-3d99b541e58c-kube-api-access-dm4gr\") pod \"cluster-image-registry-operator-dc59b4c8b-76qdl\" (UID: \"198caafc-507d-4c27-87bb-3d99b541e58c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-76qdl" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582502 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/23cb7dcf-9cf8-4d10-86dd-496077d38670-tmpfs\") pod \"packageserver-d55dfcdfc-gtlmh\" (UID: \"23cb7dcf-9cf8-4d10-86dd-496077d38670\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gtlmh" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582521 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x2ct\" (UniqueName: \"kubernetes.io/projected/5e9a3678-76c3-40e5-861d-3e8eb68cd783-kube-api-access-4x2ct\") pod \"control-plane-machine-set-operator-78cbb6b69f-mnjbh\" (UID: \"5e9a3678-76c3-40e5-861d-3e8eb68cd783\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnjbh" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582540 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e180d67d-fdb1-4874-a793-abe25452fe6d-trusted-ca\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582555 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sc7h\" (UniqueName: \"kubernetes.io/projected/1d6f1c0a-24b8-4c59-8ce5-7bf838a7836a-kube-api-access-5sc7h\") pod \"package-server-manager-789f6589d5-rbqv9\" (UID: \"1d6f1c0a-24b8-4c59-8ce5-7bf838a7836a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbqv9" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582572 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58dc7\" (UniqueName: \"kubernetes.io/projected/973ec7e3-13c9-47b8-b10e-5bff2619f164-kube-api-access-58dc7\") pod \"marketplace-operator-79b997595-wgsmb\" (UID: \"973ec7e3-13c9-47b8-b10e-5bff2619f164\") " pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582588 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f268ee9-8110-411b-870e-30f49eac05e6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gn8tb\" (UID: \"1f268ee9-8110-411b-870e-30f49eac05e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gn8tb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.582603 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/973ec7e3-13c9-47b8-b10e-5bff2619f164-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wgsmb\" (UID: \"973ec7e3-13c9-47b8-b10e-5bff2619f164\") " pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.583198 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e180d67d-fdb1-4874-a793-abe25452fe6d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.584233 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e180d67d-fdb1-4874-a793-abe25452fe6d-trusted-ca\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.584825 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhppj\" (UniqueName: \"kubernetes.io/projected/db64eb7b-f370-4277-8983-f7c5e796466c-kube-api-access-zhppj\") pod \"etcd-operator-b45778765-6ttx5\" (UID: \"db64eb7b-f370-4277-8983-f7c5e796466c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ttx5" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.584866 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e180d67d-fdb1-4874-a793-abe25452fe6d-registry-certificates\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.584896 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e57d67e-a8ab-4574-91e4-958191d83ad5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-g52lb\" (UID: \"1e57d67e-a8ab-4574-91e4-958191d83ad5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g52lb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.584958 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/23cb7dcf-9cf8-4d10-86dd-496077d38670-webhook-cert\") pod \"packageserver-d55dfcdfc-gtlmh\" (UID: \"23cb7dcf-9cf8-4d10-86dd-496077d38670\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gtlmh" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.584976 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62a55b26-3e3b-4eb0-bc0d-546c594f9384-srv-cert\") pod 
\"olm-operator-6b444d44fb-5gzxg\" (UID: \"62a55b26-3e3b-4eb0-bc0d-546c594f9384\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5gzxg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.584995 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/48b5b565-27da-43ed-9258-f836c7293930-images\") pod \"machine-config-operator-74547568cd-g7b4m\" (UID: \"48b5b565-27da-43ed-9258-f836c7293930\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g7b4m" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.585011 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d6f1c0a-24b8-4c59-8ce5-7bf838a7836a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rbqv9\" (UID: \"1d6f1c0a-24b8-4c59-8ce5-7bf838a7836a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbqv9" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.586723 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/040a4a57-4d8b-4209-8330-a0e2f3195384-metrics-certs\") pod \"router-default-5444994796-5rtqk\" (UID: \"040a4a57-4d8b-4209-8330-a0e2f3195384\") " pod="openshift-ingress/router-default-5444994796-5rtqk" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.587080 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/973ec7e3-13c9-47b8-b10e-5bff2619f164-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wgsmb\" (UID: \"973ec7e3-13c9-47b8-b10e-5bff2619f164\") " pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.587283 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/29786e24-0b8b-48d2-9759-74b6e6011d53-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rgsmt\" (UID: \"29786e24-0b8b-48d2-9759-74b6e6011d53\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgsmt" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.587344 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/040a4a57-4d8b-4209-8330-a0e2f3195384-default-certificate\") pod \"router-default-5444994796-5rtqk\" (UID: \"040a4a57-4d8b-4209-8330-a0e2f3195384\") " pod="openshift-ingress/router-default-5444994796-5rtqk" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.587520 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e180d67d-fdb1-4874-a793-abe25452fe6d-registry-certificates\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.587685 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b-metrics-tls\") pod \"ingress-operator-5b745b69d9-lznd8\" (UID: \"6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lznd8" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.588046 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7354f154-8dbb-407c-967c-02986c478d6c-config\") pod \"kube-controller-manager-operator-78b949d7b-2szbj\" (UID: \"7354f154-8dbb-407c-967c-02986c478d6c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2szbj" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.588104 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e9a3678-76c3-40e5-861d-3e8eb68cd783-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mnjbh\" (UID: \"5e9a3678-76c3-40e5-861d-3e8eb68cd783\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnjbh" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.588306 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/040a4a57-4d8b-4209-8330-a0e2f3195384-service-ca-bundle\") pod \"router-default-5444994796-5rtqk\" (UID: \"040a4a57-4d8b-4209-8330-a0e2f3195384\") " pod="openshift-ingress/router-default-5444994796-5rtqk" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.588572 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7354f154-8dbb-407c-967c-02986c478d6c-config\") pod \"kube-controller-manager-operator-78b949d7b-2szbj\" (UID: \"7354f154-8dbb-407c-967c-02986c478d6c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2szbj" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.588979 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/040a4a57-4d8b-4209-8330-a0e2f3195384-service-ca-bundle\") pod \"router-default-5444994796-5rtqk\" (UID: \"040a4a57-4d8b-4209-8330-a0e2f3195384\") " pod="openshift-ingress/router-default-5444994796-5rtqk" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.589059 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpq6d\" (UniqueName: \"kubernetes.io/projected/03bbc152-7ab5-4be7-b3a7-c8f84b8acd14-kube-api-access-wpq6d\") pod \"kube-storage-version-migrator-operator-b67b599dd-sqtv9\" (UID: \"03bbc152-7ab5-4be7-b3a7-c8f84b8acd14\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sqtv9" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.589246 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/47bf7210-2d6d-484a-bac5-3847ea568287-socket-dir\") pod \"csi-hostpathplugin-k57sd\" (UID: \"47bf7210-2d6d-484a-bac5-3847ea568287\") " pod="hostpath-provisioner/csi-hostpathplugin-k57sd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.589464 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7354f154-8dbb-407c-967c-02986c478d6c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2szbj\" (UID: \"7354f154-8dbb-407c-967c-02986c478d6c\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2szbj" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.589465 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f268ee9-8110-411b-870e-30f49eac05e6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gn8tb\" (UID: \"1f268ee9-8110-411b-870e-30f49eac05e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gn8tb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.589529 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/47bf7210-2d6d-484a-bac5-3847ea568287-mountpoint-dir\") pod \"csi-hostpathplugin-k57sd\" (UID: \"47bf7210-2d6d-484a-bac5-3847ea568287\") " pod="hostpath-provisioner/csi-hostpathplugin-k57sd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.589571 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljgjs\" (UniqueName: \"kubernetes.io/projected/6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b-kube-api-access-ljgjs\") pod \"ingress-operator-5b745b69d9-lznd8\" (UID: \"6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lznd8" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.589619 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg7q9\" (UniqueName: \"kubernetes.io/projected/a6740627-6bd7-48f8-9dd8-ceccce34fc7f-kube-api-access-pg7q9\") pod \"collect-profiles-29499255-6plmj\" (UID: \"a6740627-6bd7-48f8-9dd8-ceccce34fc7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499255-6plmj" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.589671 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ac3ec5b-0194-404e-876a-7431e3bd1c6e-metrics-tls\") pod \"dns-operator-744455d44c-t85m5\" (UID: \"9ac3ec5b-0194-404e-876a-7431e3bd1c6e\") " pod="openshift-dns-operator/dns-operator-744455d44c-t85m5" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.589801 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcvqd\" (UniqueName: \"kubernetes.io/projected/62a55b26-3e3b-4eb0-bc0d-546c594f9384-kube-api-access-pcvqd\") pod \"olm-operator-6b444d44fb-5gzxg\" (UID: \"62a55b26-3e3b-4eb0-bc0d-546c594f9384\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5gzxg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.589893 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/198caafc-507d-4c27-87bb-3d99b541e58c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-76qdl\" (UID: \"198caafc-507d-4c27-87bb-3d99b541e58c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-76qdl" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.589937 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0c41b98a-0076-4305-8540-4365c212bfd2-srv-cert\") pod \"catalog-operator-68c6474976-xn6hf\" (UID: \"0c41b98a-0076-4305-8540-4365c212bfd2\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xn6hf" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.589982 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db64eb7b-f370-4277-8983-f7c5e796466c-serving-cert\") pod \"etcd-operator-b45778765-6ttx5\" (UID: \"db64eb7b-f370-4277-8983-f7c5e796466c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ttx5" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590015 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/198caafc-507d-4c27-87bb-3d99b541e58c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-76qdl\" (UID: \"198caafc-507d-4c27-87bb-3d99b541e58c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-76qdl" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590067 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03bbc152-7ab5-4be7-b3a7-c8f84b8acd14-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sqtv9\" (UID: \"03bbc152-7ab5-4be7-b3a7-c8f84b8acd14\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sqtv9" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590108 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/040a4a57-4d8b-4209-8330-a0e2f3195384-stats-auth\") pod \"router-default-5444994796-5rtqk\" (UID: \"040a4a57-4d8b-4209-8330-a0e2f3195384\") " pod="openshift-ingress/router-default-5444994796-5rtqk" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590151 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f268ee9-8110-411b-870e-30f49eac05e6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gn8tb\" (UID: \"1f268ee9-8110-411b-870e-30f49eac05e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gn8tb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590186 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/386886d8-a869-46f4-b9b2-b5e142721ce2-signing-key\") pod \"service-ca-9c57cc56f-w4mdd\" (UID: \"386886d8-a869-46f4-b9b2-b5e142721ce2\") " pod="openshift-service-ca/service-ca-9c57cc56f-w4mdd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590228 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e180d67d-fdb1-4874-a793-abe25452fe6d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590260 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6740627-6bd7-48f8-9dd8-ceccce34fc7f-secret-volume\") pod \"collect-profiles-29499255-6plmj\" (UID: \"a6740627-6bd7-48f8-9dd8-ceccce34fc7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499255-6plmj" Feb 01 14:23:19 crc 
kubenswrapper[4820]: I0201 14:23:19.590299 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ff1096c-6211-4a8c-983d-a03517127437-serving-cert\") pod \"service-ca-operator-777779d784-nlsnf\" (UID: \"7ff1096c-6211-4a8c-983d-a03517127437\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nlsnf" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590322 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/47bf7210-2d6d-484a-bac5-3847ea568287-plugins-dir\") pod \"csi-hostpathplugin-k57sd\" (UID: \"47bf7210-2d6d-484a-bac5-3847ea568287\") " pod="hostpath-provisioner/csi-hostpathplugin-k57sd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590357 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lznd8\" (UID: \"6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lznd8" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590380 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72158998-bd6f-45ff-b2dd-06e73ee5d53f-config-volume\") pod \"dns-default-hkggw\" (UID: \"72158998-bd6f-45ff-b2dd-06e73ee5d53f\") " pod="openshift-dns/dns-default-hkggw" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590407 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hn4l\" (UniqueName: \"kubernetes.io/projected/29786e24-0b8b-48d2-9759-74b6e6011d53-kube-api-access-4hn4l\") pod \"cluster-samples-operator-665b6dd947-rgsmt\" (UID: \"29786e24-0b8b-48d2-9759-74b6e6011d53\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgsmt" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590441 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03bbc152-7ab5-4be7-b3a7-c8f84b8acd14-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sqtv9\" (UID: \"03bbc152-7ab5-4be7-b3a7-c8f84b8acd14\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sqtv9" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590475 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmmvd\" (UniqueName: \"kubernetes.io/projected/23cb7dcf-9cf8-4d10-86dd-496077d38670-kube-api-access-qmmvd\") pod \"packageserver-d55dfcdfc-gtlmh\" (UID: \"23cb7dcf-9cf8-4d10-86dd-496077d38670\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gtlmh" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590519 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fcjt\" (UniqueName: \"kubernetes.io/projected/706328b3-d6e3-40b7-8c5f-d475f19ed1fb-kube-api-access-8fcjt\") pod \"ingress-canary-bmjtr\" (UID: \"706328b3-d6e3-40b7-8c5f-d475f19ed1fb\") " pod="openshift-ingress-canary/ingress-canary-bmjtr" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590577 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e180d67d-fdb1-4874-a793-abe25452fe6d-bound-sa-token\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590618 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcqwp\" (UniqueName: \"kubernetes.io/projected/72158998-bd6f-45ff-b2dd-06e73ee5d53f-kube-api-access-bcqwp\") pod \"dns-default-hkggw\" (UID: \"72158998-bd6f-45ff-b2dd-06e73ee5d53f\") " pod="openshift-dns/dns-default-hkggw" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590656 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhqdr\" (UniqueName: \"kubernetes.io/projected/47bf7210-2d6d-484a-bac5-3847ea568287-kube-api-access-fhqdr\") pod \"csi-hostpathplugin-k57sd\" (UID: \"47bf7210-2d6d-484a-bac5-3847ea568287\") " pod="hostpath-provisioner/csi-hostpathplugin-k57sd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590690 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e5053e28-cfcd-4e0b-aadc-18ed97ea8c3d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zckvh\" (UID: \"e5053e28-cfcd-4e0b-aadc-18ed97ea8c3d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zckvh" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590722 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6740627-6bd7-48f8-9dd8-ceccce34fc7f-config-volume\") pod \"collect-profiles-29499255-6plmj\" (UID: \"a6740627-6bd7-48f8-9dd8-ceccce34fc7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499255-6plmj" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590107 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/198caafc-507d-4c27-87bb-3d99b541e58c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-76qdl\" (UID: \"198caafc-507d-4c27-87bb-3d99b541e58c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-76qdl" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590759 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/706328b3-d6e3-40b7-8c5f-d475f19ed1fb-cert\") pod \"ingress-canary-bmjtr\" (UID: \"706328b3-d6e3-40b7-8c5f-d475f19ed1fb\") " pod="openshift-ingress-canary/ingress-canary-bmjtr" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590810 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b-trusted-ca\") pod \"ingress-operator-5b745b69d9-lznd8\" (UID: \"6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lznd8" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590841 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db64eb7b-f370-4277-8983-f7c5e796466c-config\") pod \"etcd-operator-b45778765-6ttx5\" (UID: 
\"db64eb7b-f370-4277-8983-f7c5e796466c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ttx5" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590843 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/29786e24-0b8b-48d2-9759-74b6e6011d53-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rgsmt\" (UID: \"29786e24-0b8b-48d2-9759-74b6e6011d53\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgsmt" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590888 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bc9dfe32-8826-439f-b09b-a4f1fb00cc5b-node-bootstrap-token\") pod \"machine-config-server-njr47\" (UID: \"bc9dfe32-8826-439f-b09b-a4f1fb00cc5b\") " pod="openshift-machine-config-operator/machine-config-server-njr47" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.590930 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/760d30a8-d961-45a0-9b33-142f45625c41-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rfpxv\" (UID: \"760d30a8-d961-45a0-9b33-142f45625c41\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rfpxv" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.591565 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e57d67e-a8ab-4574-91e4-958191d83ad5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-g52lb\" (UID: \"1e57d67e-a8ab-4574-91e4-958191d83ad5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g52lb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.591799 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e180d67d-fdb1-4874-a793-abe25452fe6d-registry-tls\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.591910 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0c41b98a-0076-4305-8540-4365c212bfd2-profile-collector-cert\") pod \"catalog-operator-68c6474976-xn6hf\" (UID: \"0c41b98a-0076-4305-8540-4365c212bfd2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xn6hf" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.592327 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n22zj" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.592627 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/db64eb7b-f370-4277-8983-f7c5e796466c-etcd-client\") pod \"etcd-operator-b45778765-6ttx5\" (UID: \"db64eb7b-f370-4277-8983-f7c5e796466c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ttx5" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.592648 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/47bf7210-2d6d-484a-bac5-3847ea568287-registration-dir\") pod \"csi-hostpathplugin-k57sd\" (UID: \"47bf7210-2d6d-484a-bac5-3847ea568287\") " pod="hostpath-provisioner/csi-hostpathplugin-k57sd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.592668 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7354f154-8dbb-407c-967c-02986c478d6c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2szbj\" (UID: \"7354f154-8dbb-407c-967c-02986c478d6c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2szbj" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.595219 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e180d67d-fdb1-4874-a793-abe25452fe6d-registry-tls\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.595333 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b-trusted-ca\") pod \"ingress-operator-5b745b69d9-lznd8\" (UID: \"6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lznd8" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.595763 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.595843 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03bbc152-7ab5-4be7-b3a7-c8f84b8acd14-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sqtv9\" (UID: \"03bbc152-7ab5-4be7-b3a7-c8f84b8acd14\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sqtv9" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.595864 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bc9dfe32-8826-439f-b09b-a4f1fb00cc5b-certs\") pod \"machine-config-server-njr47\" (UID: \"bc9dfe32-8826-439f-b09b-a4f1fb00cc5b\") " pod="openshift-machine-config-operator/machine-config-server-njr47" Feb 01 14:23:19 crc kubenswrapper[4820]: E0201 14:23:19.596074 
4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:20.096063084 +0000 UTC m=+141.616429368 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.607646 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03bbc152-7ab5-4be7-b3a7-c8f84b8acd14-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sqtv9\" (UID: \"03bbc152-7ab5-4be7-b3a7-c8f84b8acd14\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sqtv9" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.608122 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/040a4a57-4d8b-4209-8330-a0e2f3195384-stats-auth\") pod \"router-default-5444994796-5rtqk\" (UID: \"040a4a57-4d8b-4209-8330-a0e2f3195384\") " pod="openshift-ingress/router-default-5444994796-5rtqk" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.608675 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db64eb7b-f370-4277-8983-f7c5e796466c-serving-cert\") pod \"etcd-operator-b45778765-6ttx5\" (UID: \"db64eb7b-f370-4277-8983-f7c5e796466c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ttx5" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.609429 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ac3ec5b-0194-404e-876a-7431e3bd1c6e-metrics-tls\") pod \"dns-operator-744455d44c-t85m5\" (UID: \"9ac3ec5b-0194-404e-876a-7431e3bd1c6e\") " pod="openshift-dns-operator/dns-operator-744455d44c-t85m5" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.610468 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqfqp\" (UniqueName: \"kubernetes.io/projected/48b5b565-27da-43ed-9258-f836c7293930-kube-api-access-gqfqp\") pod \"machine-config-operator-74547568cd-g7b4m\" (UID: \"48b5b565-27da-43ed-9258-f836c7293930\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g7b4m" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.610569 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72158998-bd6f-45ff-b2dd-06e73ee5d53f-metrics-tls\") pod \"dns-default-hkggw\" (UID: \"72158998-bd6f-45ff-b2dd-06e73ee5d53f\") " pod="openshift-dns/dns-default-hkggw" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.611598 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/040a4a57-4d8b-4209-8330-a0e2f3195384-default-certificate\") pod \"router-default-5444994796-5rtqk\" (UID: 
\"040a4a57-4d8b-4209-8330-a0e2f3195384\") " pod="openshift-ingress/router-default-5444994796-5rtqk" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.612335 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/db64eb7b-f370-4277-8983-f7c5e796466c-etcd-client\") pod \"etcd-operator-b45778765-6ttx5\" (UID: \"db64eb7b-f370-4277-8983-f7c5e796466c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ttx5" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.612945 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e180d67d-fdb1-4874-a793-abe25452fe6d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.622171 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2b7xd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.642662 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz24t\" (UniqueName: \"kubernetes.io/projected/e180d67d-fdb1-4874-a793-abe25452fe6d-kube-api-access-xz24t\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.651339 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jc9mp"] Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.656830 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv4v8\" (UniqueName: \"kubernetes.io/projected/9ac3ec5b-0194-404e-876a-7431e3bd1c6e-kube-api-access-dv4v8\") pod \"dns-operator-744455d44c-t85m5\" (UID: \"9ac3ec5b-0194-404e-876a-7431e3bd1c6e\") " pod="openshift-dns-operator/dns-operator-744455d44c-t85m5" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.658711 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-m7jf2" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.676814 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl66r\" (UniqueName: \"kubernetes.io/projected/040a4a57-4d8b-4209-8330-a0e2f3195384-kube-api-access-cl66r\") pod \"router-default-5444994796-5rtqk\" (UID: \"040a4a57-4d8b-4209-8330-a0e2f3195384\") " pod="openshift-ingress/router-default-5444994796-5rtqk" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.694897 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e57d67e-a8ab-4574-91e4-958191d83ad5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-g52lb\" (UID: \"1e57d67e-a8ab-4574-91e4-958191d83ad5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g52lb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.711243 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:19 crc kubenswrapper[4820]: E0201 14:23:19.711434 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:20.211407805 +0000 UTC m=+141.731774089 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.711486 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/47bf7210-2d6d-484a-bac5-3847ea568287-csi-data-dir\") pod \"csi-hostpathplugin-k57sd\" (UID: \"47bf7210-2d6d-484a-bac5-3847ea568287\") " pod="hostpath-provisioner/csi-hostpathplugin-k57sd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.711522 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rdkt\" (UniqueName: \"kubernetes.io/projected/0ea50054-9a92-447e-aa45-115789f1cad3-kube-api-access-2rdkt\") pod \"migrator-59844c95c7-ssfjj\" (UID: \"0ea50054-9a92-447e-aa45-115789f1cad3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssfjj" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.711547 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62a55b26-3e3b-4eb0-bc0d-546c594f9384-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5gzxg\" (UID: \"62a55b26-3e3b-4eb0-bc0d-546c594f9384\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5gzxg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.711569 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qn8zz\" (UniqueName: \"kubernetes.io/projected/7ff1096c-6211-4a8c-983d-a03517127437-kube-api-access-qn8zz\") pod \"service-ca-operator-777779d784-nlsnf\" (UID: \"7ff1096c-6211-4a8c-983d-a03517127437\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nlsnf" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.711587 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckc9b\" (UniqueName: \"kubernetes.io/projected/bc9dfe32-8826-439f-b09b-a4f1fb00cc5b-kube-api-access-ckc9b\") pod \"machine-config-server-njr47\" (UID: \"bc9dfe32-8826-439f-b09b-a4f1fb00cc5b\") " pod="openshift-machine-config-operator/machine-config-server-njr47" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.711616 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f2kg\" (UniqueName: \"kubernetes.io/projected/760d30a8-d961-45a0-9b33-142f45625c41-kube-api-access-5f2kg\") pod \"machine-config-controller-84d6567774-rfpxv\" (UID: \"760d30a8-d961-45a0-9b33-142f45625c41\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rfpxv" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.711623 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/47bf7210-2d6d-484a-bac5-3847ea568287-csi-data-dir\") pod \"csi-hostpathplugin-k57sd\" (UID: \"47bf7210-2d6d-484a-bac5-3847ea568287\") " pod="hostpath-provisioner/csi-hostpathplugin-k57sd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.711638 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/48b5b565-27da-43ed-9258-f836c7293930-auth-proxy-config\") pod \"machine-config-operator-74547568cd-g7b4m\" (UID: \"48b5b565-27da-43ed-9258-f836c7293930\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g7b4m" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.711710 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/23cb7dcf-9cf8-4d10-86dd-496077d38670-apiservice-cert\") pod \"packageserver-d55dfcdfc-gtlmh\" (UID: \"23cb7dcf-9cf8-4d10-86dd-496077d38670\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gtlmh" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.711742 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/48b5b565-27da-43ed-9258-f836c7293930-proxy-tls\") pod \"machine-config-operator-74547568cd-g7b4m\" (UID: \"48b5b565-27da-43ed-9258-f836c7293930\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g7b4m" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.711768 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-426hv\" (UniqueName: \"kubernetes.io/projected/0c41b98a-0076-4305-8540-4365c212bfd2-kube-api-access-426hv\") pod \"catalog-operator-68c6474976-xn6hf\" (UID: \"0c41b98a-0076-4305-8540-4365c212bfd2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xn6hf" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.711787 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ff1096c-6211-4a8c-983d-a03517127437-config\") pod 
\"service-ca-operator-777779d784-nlsnf\" (UID: \"7ff1096c-6211-4a8c-983d-a03517127437\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nlsnf" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.711805 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jclwz\" (UniqueName: \"kubernetes.io/projected/e5053e28-cfcd-4e0b-aadc-18ed97ea8c3d-kube-api-access-jclwz\") pod \"multus-admission-controller-857f4d67dd-zckvh\" (UID: \"e5053e28-cfcd-4e0b-aadc-18ed97ea8c3d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zckvh" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.711832 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/760d30a8-d961-45a0-9b33-142f45625c41-proxy-tls\") pod \"machine-config-controller-84d6567774-rfpxv\" (UID: \"760d30a8-d961-45a0-9b33-142f45625c41\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rfpxv" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.711851 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p45pt\" (UniqueName: \"kubernetes.io/projected/386886d8-a869-46f4-b9b2-b5e142721ce2-kube-api-access-p45pt\") pod \"service-ca-9c57cc56f-w4mdd\" (UID: \"386886d8-a869-46f4-b9b2-b5e142721ce2\") " pod="openshift-service-ca/service-ca-9c57cc56f-w4mdd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.711893 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/386886d8-a869-46f4-b9b2-b5e142721ce2-signing-cabundle\") pod \"service-ca-9c57cc56f-w4mdd\" (UID: \"386886d8-a869-46f4-b9b2-b5e142721ce2\") " pod="openshift-service-ca/service-ca-9c57cc56f-w4mdd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.711927 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x2ct\" (UniqueName: \"kubernetes.io/projected/5e9a3678-76c3-40e5-861d-3e8eb68cd783-kube-api-access-4x2ct\") pod \"control-plane-machine-set-operator-78cbb6b69f-mnjbh\" (UID: \"5e9a3678-76c3-40e5-861d-3e8eb68cd783\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnjbh" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.711945 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/23cb7dcf-9cf8-4d10-86dd-496077d38670-tmpfs\") pod \"packageserver-d55dfcdfc-gtlmh\" (UID: \"23cb7dcf-9cf8-4d10-86dd-496077d38670\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gtlmh" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.711968 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sc7h\" (UniqueName: \"kubernetes.io/projected/1d6f1c0a-24b8-4c59-8ce5-7bf838a7836a-kube-api-access-5sc7h\") pod \"package-server-manager-789f6589d5-rbqv9\" (UID: \"1d6f1c0a-24b8-4c59-8ce5-7bf838a7836a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbqv9" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.711988 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58dc7\" (UniqueName: \"kubernetes.io/projected/973ec7e3-13c9-47b8-b10e-5bff2619f164-kube-api-access-58dc7\") pod \"marketplace-operator-79b997595-wgsmb\" (UID: \"973ec7e3-13c9-47b8-b10e-5bff2619f164\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.712008 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f268ee9-8110-411b-870e-30f49eac05e6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gn8tb\" (UID: \"1f268ee9-8110-411b-870e-30f49eac05e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gn8tb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.712029 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/973ec7e3-13c9-47b8-b10e-5bff2619f164-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wgsmb\" (UID: \"973ec7e3-13c9-47b8-b10e-5bff2619f164\") " pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.712056 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62a55b26-3e3b-4eb0-bc0d-546c594f9384-srv-cert\") pod \"olm-operator-6b444d44fb-5gzxg\" (UID: \"62a55b26-3e3b-4eb0-bc0d-546c594f9384\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5gzxg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.712071 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/48b5b565-27da-43ed-9258-f836c7293930-images\") pod \"machine-config-operator-74547568cd-g7b4m\" (UID: \"48b5b565-27da-43ed-9258-f836c7293930\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g7b4m" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.712089 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d6f1c0a-24b8-4c59-8ce5-7bf838a7836a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rbqv9\" (UID: \"1d6f1c0a-24b8-4c59-8ce5-7bf838a7836a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbqv9" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.712109 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/23cb7dcf-9cf8-4d10-86dd-496077d38670-webhook-cert\") pod \"packageserver-d55dfcdfc-gtlmh\" (UID: \"23cb7dcf-9cf8-4d10-86dd-496077d38670\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gtlmh" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.712126 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/973ec7e3-13c9-47b8-b10e-5bff2619f164-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wgsmb\" (UID: \"973ec7e3-13c9-47b8-b10e-5bff2619f164\") " pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.712148 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e9a3678-76c3-40e5-861d-3e8eb68cd783-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mnjbh\" (UID: \"5e9a3678-76c3-40e5-861d-3e8eb68cd783\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnjbh" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.712194 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/47bf7210-2d6d-484a-bac5-3847ea568287-socket-dir\") pod \"csi-hostpathplugin-k57sd\" (UID: \"47bf7210-2d6d-484a-bac5-3847ea568287\") " pod="hostpath-provisioner/csi-hostpathplugin-k57sd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.712212 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/47bf7210-2d6d-484a-bac5-3847ea568287-mountpoint-dir\") pod \"csi-hostpathplugin-k57sd\" (UID: \"47bf7210-2d6d-484a-bac5-3847ea568287\") " pod="hostpath-provisioner/csi-hostpathplugin-k57sd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.712228 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f268ee9-8110-411b-870e-30f49eac05e6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gn8tb\" (UID: \"1f268ee9-8110-411b-870e-30f49eac05e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gn8tb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.712235 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/48b5b565-27da-43ed-9258-f836c7293930-auth-proxy-config\") pod \"machine-config-operator-74547568cd-g7b4m\" (UID: \"48b5b565-27da-43ed-9258-f836c7293930\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g7b4m" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.712265 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg7q9\" (UniqueName: \"kubernetes.io/projected/a6740627-6bd7-48f8-9dd8-ceccce34fc7f-kube-api-access-pg7q9\") pod \"collect-profiles-29499255-6plmj\" (UID: \"a6740627-6bd7-48f8-9dd8-ceccce34fc7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499255-6plmj" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.712284 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcvqd\" (UniqueName: \"kubernetes.io/projected/62a55b26-3e3b-4eb0-bc0d-546c594f9384-kube-api-access-pcvqd\") pod \"olm-operator-6b444d44fb-5gzxg\" (UID: \"62a55b26-3e3b-4eb0-bc0d-546c594f9384\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5gzxg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.712304 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0c41b98a-0076-4305-8540-4365c212bfd2-srv-cert\") pod \"catalog-operator-68c6474976-xn6hf\" (UID: \"0c41b98a-0076-4305-8540-4365c212bfd2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xn6hf" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.712331 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/386886d8-a869-46f4-b9b2-b5e142721ce2-signing-key\") pod \"service-ca-9c57cc56f-w4mdd\" (UID: \"386886d8-a869-46f4-b9b2-b5e142721ce2\") " pod="openshift-service-ca/service-ca-9c57cc56f-w4mdd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.712347 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f268ee9-8110-411b-870e-30f49eac05e6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gn8tb\" (UID: \"1f268ee9-8110-411b-870e-30f49eac05e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gn8tb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.712363 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6740627-6bd7-48f8-9dd8-ceccce34fc7f-secret-volume\") pod \"collect-profiles-29499255-6plmj\" (UID: \"a6740627-6bd7-48f8-9dd8-ceccce34fc7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499255-6plmj" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.712382 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ff1096c-6211-4a8c-983d-a03517127437-serving-cert\") pod \"service-ca-operator-777779d784-nlsnf\" (UID: \"7ff1096c-6211-4a8c-983d-a03517127437\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nlsnf" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.712407 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/47bf7210-2d6d-484a-bac5-3847ea568287-plugins-dir\") pod \"csi-hostpathplugin-k57sd\" (UID: \"47bf7210-2d6d-484a-bac5-3847ea568287\") " pod="hostpath-provisioner/csi-hostpathplugin-k57sd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.712427 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ff1096c-6211-4a8c-983d-a03517127437-config\") pod \"service-ca-operator-777779d784-nlsnf\" (UID: \"7ff1096c-6211-4a8c-983d-a03517127437\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nlsnf" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.712960 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72158998-bd6f-45ff-b2dd-06e73ee5d53f-config-volume\") pod \"dns-default-hkggw\" (UID: \"72158998-bd6f-45ff-b2dd-06e73ee5d53f\") " pod="openshift-dns/dns-default-hkggw" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.712428 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72158998-bd6f-45ff-b2dd-06e73ee5d53f-config-volume\") pod \"dns-default-hkggw\" (UID: \"72158998-bd6f-45ff-b2dd-06e73ee5d53f\") " pod="openshift-dns/dns-default-hkggw" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.712994 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/23cb7dcf-9cf8-4d10-86dd-496077d38670-tmpfs\") pod \"packageserver-d55dfcdfc-gtlmh\" (UID: \"23cb7dcf-9cf8-4d10-86dd-496077d38670\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gtlmh" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.713009 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmmvd\" (UniqueName: \"kubernetes.io/projected/23cb7dcf-9cf8-4d10-86dd-496077d38670-kube-api-access-qmmvd\") pod \"packageserver-d55dfcdfc-gtlmh\" (UID: \"23cb7dcf-9cf8-4d10-86dd-496077d38670\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gtlmh" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.713052 
4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fcjt\" (UniqueName: \"kubernetes.io/projected/706328b3-d6e3-40b7-8c5f-d475f19ed1fb-kube-api-access-8fcjt\") pod \"ingress-canary-bmjtr\" (UID: \"706328b3-d6e3-40b7-8c5f-d475f19ed1fb\") " pod="openshift-ingress-canary/ingress-canary-bmjtr" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.713086 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/47bf7210-2d6d-484a-bac5-3847ea568287-socket-dir\") pod \"csi-hostpathplugin-k57sd\" (UID: \"47bf7210-2d6d-484a-bac5-3847ea568287\") " pod="hostpath-provisioner/csi-hostpathplugin-k57sd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.713092 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcqwp\" (UniqueName: \"kubernetes.io/projected/72158998-bd6f-45ff-b2dd-06e73ee5d53f-kube-api-access-bcqwp\") pod \"dns-default-hkggw\" (UID: \"72158998-bd6f-45ff-b2dd-06e73ee5d53f\") " pod="openshift-dns/dns-default-hkggw" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.713166 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhqdr\" (UniqueName: \"kubernetes.io/projected/47bf7210-2d6d-484a-bac5-3847ea568287-kube-api-access-fhqdr\") pod \"csi-hostpathplugin-k57sd\" (UID: \"47bf7210-2d6d-484a-bac5-3847ea568287\") " pod="hostpath-provisioner/csi-hostpathplugin-k57sd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.713197 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/706328b3-d6e3-40b7-8c5f-d475f19ed1fb-cert\") pod \"ingress-canary-bmjtr\" (UID: \"706328b3-d6e3-40b7-8c5f-d475f19ed1fb\") " pod="openshift-ingress-canary/ingress-canary-bmjtr" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.713213 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e5053e28-cfcd-4e0b-aadc-18ed97ea8c3d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zckvh\" (UID: \"e5053e28-cfcd-4e0b-aadc-18ed97ea8c3d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zckvh" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.713228 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6740627-6bd7-48f8-9dd8-ceccce34fc7f-config-volume\") pod \"collect-profiles-29499255-6plmj\" (UID: \"a6740627-6bd7-48f8-9dd8-ceccce34fc7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499255-6plmj" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.713250 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bc9dfe32-8826-439f-b09b-a4f1fb00cc5b-node-bootstrap-token\") pod \"machine-config-server-njr47\" (UID: \"bc9dfe32-8826-439f-b09b-a4f1fb00cc5b\") " pod="openshift-machine-config-operator/machine-config-server-njr47" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.713267 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/760d30a8-d961-45a0-9b33-142f45625c41-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rfpxv\" (UID: \"760d30a8-d961-45a0-9b33-142f45625c41\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rfpxv" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.713287 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0c41b98a-0076-4305-8540-4365c212bfd2-profile-collector-cert\") pod \"catalog-operator-68c6474976-xn6hf\" (UID: \"0c41b98a-0076-4305-8540-4365c212bfd2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xn6hf" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.713305 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/47bf7210-2d6d-484a-bac5-3847ea568287-registration-dir\") pod \"csi-hostpathplugin-k57sd\" (UID: \"47bf7210-2d6d-484a-bac5-3847ea568287\") " pod="hostpath-provisioner/csi-hostpathplugin-k57sd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.713328 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bc9dfe32-8826-439f-b09b-a4f1fb00cc5b-certs\") pod \"machine-config-server-njr47\" (UID: \"bc9dfe32-8826-439f-b09b-a4f1fb00cc5b\") " pod="openshift-machine-config-operator/machine-config-server-njr47" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.713348 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.713366 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqfqp\" (UniqueName: \"kubernetes.io/projected/48b5b565-27da-43ed-9258-f836c7293930-kube-api-access-gqfqp\") pod \"machine-config-operator-74547568cd-g7b4m\" (UID: \"48b5b565-27da-43ed-9258-f836c7293930\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g7b4m" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.713382 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72158998-bd6f-45ff-b2dd-06e73ee5d53f-metrics-tls\") pod \"dns-default-hkggw\" (UID: \"72158998-bd6f-45ff-b2dd-06e73ee5d53f\") " pod="openshift-dns/dns-default-hkggw" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.713829 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/973ec7e3-13c9-47b8-b10e-5bff2619f164-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wgsmb\" (UID: \"973ec7e3-13c9-47b8-b10e-5bff2619f164\") " pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.714216 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-t85m5" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.714304 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/47bf7210-2d6d-484a-bac5-3847ea568287-mountpoint-dir\") pod \"csi-hostpathplugin-k57sd\" (UID: \"47bf7210-2d6d-484a-bac5-3847ea568287\") " pod="hostpath-provisioner/csi-hostpathplugin-k57sd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.714527 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/47bf7210-2d6d-484a-bac5-3847ea568287-registration-dir\") pod \"csi-hostpathplugin-k57sd\" (UID: \"47bf7210-2d6d-484a-bac5-3847ea568287\") " pod="hostpath-provisioner/csi-hostpathplugin-k57sd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.714593 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/47bf7210-2d6d-484a-bac5-3847ea568287-plugins-dir\") pod \"csi-hostpathplugin-k57sd\" (UID: \"47bf7210-2d6d-484a-bac5-3847ea568287\") " pod="hostpath-provisioner/csi-hostpathplugin-k57sd" Feb 01 14:23:19 crc kubenswrapper[4820]: E0201 14:23:19.714656 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:20.214647302 +0000 UTC m=+141.735013586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.715362 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/48b5b565-27da-43ed-9258-f836c7293930-images\") pod \"machine-config-operator-74547568cd-g7b4m\" (UID: \"48b5b565-27da-43ed-9258-f836c7293930\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g7b4m" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.715655 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/760d30a8-d961-45a0-9b33-142f45625c41-proxy-tls\") pod \"machine-config-controller-84d6567774-rfpxv\" (UID: \"760d30a8-d961-45a0-9b33-142f45625c41\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rfpxv" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.716629 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f268ee9-8110-411b-870e-30f49eac05e6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gn8tb\" (UID: \"1f268ee9-8110-411b-870e-30f49eac05e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gn8tb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.716653 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/760d30a8-d961-45a0-9b33-142f45625c41-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rfpxv\" (UID: \"760d30a8-d961-45a0-9b33-142f45625c41\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rfpxv" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.716987 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72158998-bd6f-45ff-b2dd-06e73ee5d53f-metrics-tls\") pod \"dns-default-hkggw\" (UID: \"72158998-bd6f-45ff-b2dd-06e73ee5d53f\") " pod="openshift-dns/dns-default-hkggw" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.717202 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d6f1c0a-24b8-4c59-8ce5-7bf838a7836a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rbqv9\" (UID: \"1d6f1c0a-24b8-4c59-8ce5-7bf838a7836a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbqv9" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.718390 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f268ee9-8110-411b-870e-30f49eac05e6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gn8tb\" (UID: \"1f268ee9-8110-411b-870e-30f49eac05e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gn8tb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.718443 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0c41b98a-0076-4305-8540-4365c212bfd2-profile-collector-cert\") pod \"catalog-operator-68c6474976-xn6hf\" (UID: \"0c41b98a-0076-4305-8540-4365c212bfd2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xn6hf" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.718459 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6740627-6bd7-48f8-9dd8-ceccce34fc7f-secret-volume\") pod \"collect-profiles-29499255-6plmj\" (UID: \"a6740627-6bd7-48f8-9dd8-ceccce34fc7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499255-6plmj" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.718515 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0c41b98a-0076-4305-8540-4365c212bfd2-srv-cert\") pod \"catalog-operator-68c6474976-xn6hf\" (UID: \"0c41b98a-0076-4305-8540-4365c212bfd2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xn6hf" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.718667 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e9a3678-76c3-40e5-861d-3e8eb68cd783-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mnjbh\" (UID: \"5e9a3678-76c3-40e5-861d-3e8eb68cd783\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnjbh" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.718676 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/48b5b565-27da-43ed-9258-f836c7293930-proxy-tls\") pod \"machine-config-operator-74547568cd-g7b4m\" (UID: 
\"48b5b565-27da-43ed-9258-f836c7293930\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g7b4m" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.719330 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e5053e28-cfcd-4e0b-aadc-18ed97ea8c3d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zckvh\" (UID: \"e5053e28-cfcd-4e0b-aadc-18ed97ea8c3d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zckvh" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.718792 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ff1096c-6211-4a8c-983d-a03517127437-serving-cert\") pod \"service-ca-operator-777779d784-nlsnf\" (UID: \"7ff1096c-6211-4a8c-983d-a03517127437\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nlsnf" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.720238 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62a55b26-3e3b-4eb0-bc0d-546c594f9384-srv-cert\") pod \"olm-operator-6b444d44fb-5gzxg\" (UID: \"62a55b26-3e3b-4eb0-bc0d-546c594f9384\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5gzxg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.720325 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/973ec7e3-13c9-47b8-b10e-5bff2619f164-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wgsmb\" (UID: \"973ec7e3-13c9-47b8-b10e-5bff2619f164\") " pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.720450 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bc9dfe32-8826-439f-b09b-a4f1fb00cc5b-node-bootstrap-token\") pod \"machine-config-server-njr47\" (UID: \"bc9dfe32-8826-439f-b09b-a4f1fb00cc5b\") " pod="openshift-machine-config-operator/machine-config-server-njr47" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.721386 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bc9dfe32-8826-439f-b09b-a4f1fb00cc5b-certs\") pod \"machine-config-server-njr47\" (UID: \"bc9dfe32-8826-439f-b09b-a4f1fb00cc5b\") " pod="openshift-machine-config-operator/machine-config-server-njr47" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.721386 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/706328b3-d6e3-40b7-8c5f-d475f19ed1fb-cert\") pod \"ingress-canary-bmjtr\" (UID: \"706328b3-d6e3-40b7-8c5f-d475f19ed1fb\") " pod="openshift-ingress-canary/ingress-canary-bmjtr" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.721592 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-5rtqk" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.723448 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/db64eb7b-f370-4277-8983-f7c5e796466c-etcd-ca\") pod \"etcd-operator-b45778765-6ttx5\" (UID: \"db64eb7b-f370-4277-8983-f7c5e796466c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ttx5" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.723581 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62a55b26-3e3b-4eb0-bc0d-546c594f9384-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5gzxg\" (UID: \"62a55b26-3e3b-4eb0-bc0d-546c594f9384\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5gzxg" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.723639 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/23cb7dcf-9cf8-4d10-86dd-496077d38670-webhook-cert\") pod \"packageserver-d55dfcdfc-gtlmh\" (UID: \"23cb7dcf-9cf8-4d10-86dd-496077d38670\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gtlmh" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.723862 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/386886d8-a869-46f4-b9b2-b5e142721ce2-signing-cabundle\") pod \"service-ca-9c57cc56f-w4mdd\" (UID: \"386886d8-a869-46f4-b9b2-b5e142721ce2\") " pod="openshift-service-ca/service-ca-9c57cc56f-w4mdd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.724096 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/23cb7dcf-9cf8-4d10-86dd-496077d38670-apiservice-cert\") pod \"packageserver-d55dfcdfc-gtlmh\" (UID: \"23cb7dcf-9cf8-4d10-86dd-496077d38670\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gtlmh" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.725648 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/386886d8-a869-46f4-b9b2-b5e142721ce2-signing-key\") pod \"service-ca-9c57cc56f-w4mdd\" (UID: \"386886d8-a869-46f4-b9b2-b5e142721ce2\") " pod="openshift-service-ca/service-ca-9c57cc56f-w4mdd" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.726043 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm4gr\" (UniqueName: \"kubernetes.io/projected/198caafc-507d-4c27-87bb-3d99b541e58c-kube-api-access-dm4gr\") pod \"cluster-image-registry-operator-dc59b4c8b-76qdl\" (UID: \"198caafc-507d-4c27-87bb-3d99b541e58c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-76qdl" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.734124 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/198caafc-507d-4c27-87bb-3d99b541e58c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-76qdl\" (UID: \"198caafc-507d-4c27-87bb-3d99b541e58c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-76qdl" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.739990 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g52lb" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.757408 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpq6d\" (UniqueName: \"kubernetes.io/projected/03bbc152-7ab5-4be7-b3a7-c8f84b8acd14-kube-api-access-wpq6d\") pod \"kube-storage-version-migrator-operator-b67b599dd-sqtv9\" (UID: \"03bbc152-7ab5-4be7-b3a7-c8f84b8acd14\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sqtv9" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.777534 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljgjs\" (UniqueName: \"kubernetes.io/projected/6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b-kube-api-access-ljgjs\") pod \"ingress-operator-5b745b69d9-lznd8\" (UID: \"6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lznd8" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.778934 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jzsbs"] Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.780411 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545"] Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.781102 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gwgjr"] Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.784816 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f4mwh"] Feb 01 14:23:19 crc kubenswrapper[4820]: W0201 14:23:19.790258 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76eca8a3_6f22_4275_b05a_51b795162ce3.slice/crio-08be8aff5677efd8044ea00c84e198a3b931648b6cdcf9dae73e71ebfcac8e97 WatchSource:0}: Error finding container 08be8aff5677efd8044ea00c84e198a3b931648b6cdcf9dae73e71ebfcac8e97: Status 404 returned error can't find the container with id 08be8aff5677efd8044ea00c84e198a3b931648b6cdcf9dae73e71ebfcac8e97 Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.794240 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6xsfq"] Feb 01 14:23:19 crc kubenswrapper[4820]: W0201 14:23:19.796384 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8c3f194_8abb_4c41_8418_169b11d6afd2.slice/crio-18768ad1fb07bb922c05193286fd55e72820472a3f5483bf85c41c26d2550b50 WatchSource:0}: Error finding container 18768ad1fb07bb922c05193286fd55e72820472a3f5483bf85c41c26d2550b50: Status 404 returned error can't find the container with id 18768ad1fb07bb922c05193286fd55e72820472a3f5483bf85c41c26d2550b50 Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.796829 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/198caafc-507d-4c27-87bb-3d99b541e58c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-76qdl\" (UID: \"198caafc-507d-4c27-87bb-3d99b541e58c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-76qdl" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.812489 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hn4l\" (UniqueName: \"kubernetes.io/projected/29786e24-0b8b-48d2-9759-74b6e6011d53-kube-api-access-4hn4l\") pod \"cluster-samples-operator-665b6dd947-rgsmt\" (UID: \"29786e24-0b8b-48d2-9759-74b6e6011d53\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgsmt" Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.814433 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:19 crc kubenswrapper[4820]: E0201 14:23:19.814596 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:20.314573651 +0000 UTC m=+141.834939935 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.814851 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:19 crc kubenswrapper[4820]: E0201 14:23:19.815131 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:20.315115607 +0000 UTC m=+141.835481951 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.827981 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" event={"ID":"4e187fc4-2932-4e70-81d7-34fe2c16dcda","Type":"ContainerStarted","Data":"42429c9ec4c152487f0e7ac2bcfdb4515dd470e6c3480edb24bdc8974d7c199c"}
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.829511 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5rtqk" event={"ID":"040a4a57-4d8b-4209-8330-a0e2f3195384","Type":"ContainerStarted","Data":"155ff861b2bfc2931df41ee8caf0d89cd757a350794278ca314f31d6b6e6f233"}
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.833076 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" event={"ID":"8c2d6da6-c397-4077-81b1-d5b492811214","Type":"ContainerStarted","Data":"e4ceb009e44bb9008f77aeef278ea3f33c6bd4bcd6bfd88827e8baf8eab1a8aa"}
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.837734 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e180d67d-fdb1-4874-a793-abe25452fe6d-bound-sa-token\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.838449 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" event={"ID":"df0df42a-d0cc-4564-856d-a0d3ace0021f","Type":"ContainerStarted","Data":"2f6a650bcebbc2dbc33f52c64a6cb99f6b991b92c4c1f8de3504cb48b900b0bc"}
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.840338 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" event={"ID":"d0e5cde7-6e0f-4213-94d4-746cfdb568e9","Type":"ContainerStarted","Data":"ee056db5b4b303aef8dc37aed6ce42cabca130351d993b367ebda045eb9930be"}
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.841217 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-6xsfq" event={"ID":"7ef76960-0097-4477-ae8f-0f6ddb18920b","Type":"ContainerStarted","Data":"1d2024cbea2db77e7f3e6d134150c8159857a90c86dd87dbf6511efbde7a0fcb"}
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.850800 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jc9mp" event={"ID":"91004505-bf59-410e-831a-62e980857994","Type":"ContainerStarted","Data":"16e54d7aca35d73f58533146c9237ce4cf6e7b1d9349c8769702f7bd609a3130"}
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.858274 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lznd8\" (UID: \"6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lznd8"
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.873925 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7354f154-8dbb-407c-967c-02986c478d6c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2szbj\" (UID: \"7354f154-8dbb-407c-967c-02986c478d6c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2szbj"
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.883339 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bktjb"]
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.887415 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-j7nmb"]
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.915183 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rdkt\" (UniqueName: \"kubernetes.io/projected/0ea50054-9a92-447e-aa45-115789f1cad3-kube-api-access-2rdkt\") pod \"migrator-59844c95c7-ssfjj\" (UID: \"0ea50054-9a92-447e-aa45-115789f1cad3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssfjj"
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.917133 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:19 crc kubenswrapper[4820]: E0201 14:23:19.917789 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:20.417746269 +0000 UTC m=+141.938112553 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.937630 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckc9b\" (UniqueName: \"kubernetes.io/projected/bc9dfe32-8826-439f-b09b-a4f1fb00cc5b-kube-api-access-ckc9b\") pod \"machine-config-server-njr47\" (UID: \"bc9dfe32-8826-439f-b09b-a4f1fb00cc5b\") " pod="openshift-machine-config-operator/machine-config-server-njr47"
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.938626 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-kh9nd"]
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.942792 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-46bvd"]
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.967716 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-m7jf2"]
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.967752 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2b7xd"]
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.975511 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/db64eb7b-f370-4277-8983-f7c5e796466c-etcd-service-ca\") pod \"etcd-operator-b45778765-6ttx5\" (UID: \"db64eb7b-f370-4277-8983-f7c5e796466c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ttx5"
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.975833 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn8zz\" (UniqueName: \"kubernetes.io/projected/7ff1096c-6211-4a8c-983d-a03517127437-kube-api-access-qn8zz\") pod \"service-ca-operator-777779d784-nlsnf\" (UID: \"7ff1096c-6211-4a8c-983d-a03517127437\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nlsnf"
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.975929 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db64eb7b-f370-4277-8983-f7c5e796466c-config\") pod \"etcd-operator-b45778765-6ttx5\" (UID: \"db64eb7b-f370-4277-8983-f7c5e796466c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ttx5"
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.979438 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6740627-6bd7-48f8-9dd8-ceccce34fc7f-config-volume\") pod \"collect-profiles-29499255-6plmj\" (UID: \"a6740627-6bd7-48f8-9dd8-ceccce34fc7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499255-6plmj"
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.983696 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhppj\" (UniqueName: \"kubernetes.io/projected/db64eb7b-f370-4277-8983-f7c5e796466c-kube-api-access-zhppj\") pod \"etcd-operator-b45778765-6ttx5\" (UID: \"db64eb7b-f370-4277-8983-f7c5e796466c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ttx5"
Feb 01 14:23:19 crc kubenswrapper[4820]: I0201 14:23:19.984449 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-426hv\" (UniqueName: \"kubernetes.io/projected/0c41b98a-0076-4305-8540-4365c212bfd2-kube-api-access-426hv\") pod \"catalog-operator-68c6474976-xn6hf\" (UID: \"0c41b98a-0076-4305-8540-4365c212bfd2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xn6hf"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.001854 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-76qdl"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.002446 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgsmt"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.003953 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f2kg\" (UniqueName: \"kubernetes.io/projected/760d30a8-d961-45a0-9b33-142f45625c41-kube-api-access-5f2kg\") pod \"machine-config-controller-84d6567774-rfpxv\" (UID: \"760d30a8-d961-45a0-9b33-142f45625c41\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rfpxv"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.008978 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sqtv9"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.019819 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:20 crc kubenswrapper[4820]: E0201 14:23:20.020278 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:20.520260268 +0000 UTC m=+142.040626562 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.023745 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t85m5"]
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.024444 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jclwz\" (UniqueName: \"kubernetes.io/projected/e5053e28-cfcd-4e0b-aadc-18ed97ea8c3d-kube-api-access-jclwz\") pod \"multus-admission-controller-857f4d67dd-zckvh\" (UID: \"e5053e28-cfcd-4e0b-aadc-18ed97ea8c3d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zckvh"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.029541 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2szbj"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.034503 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p45pt\" (UniqueName: \"kubernetes.io/projected/386886d8-a869-46f4-b9b2-b5e142721ce2-kube-api-access-p45pt\") pod \"service-ca-9c57cc56f-w4mdd\" (UID: \"386886d8-a869-46f4-b9b2-b5e142721ce2\") " pod="openshift-service-ca/service-ca-9c57cc56f-w4mdd"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.049006 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lznd8"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.059130 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6ttx5"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.061159 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g52lb"]
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.062539 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x2ct\" (UniqueName: \"kubernetes.io/projected/5e9a3678-76c3-40e5-861d-3e8eb68cd783-kube-api-access-4x2ct\") pod \"control-plane-machine-set-operator-78cbb6b69f-mnjbh\" (UID: \"5e9a3678-76c3-40e5-861d-3e8eb68cd783\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnjbh"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.093461 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmmvd\" (UniqueName: \"kubernetes.io/projected/23cb7dcf-9cf8-4d10-86dd-496077d38670-kube-api-access-qmmvd\") pod \"packageserver-d55dfcdfc-gtlmh\" (UID: \"23cb7dcf-9cf8-4d10-86dd-496077d38670\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gtlmh"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.097301 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rfpxv"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.111518 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xn6hf"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.112479 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcqwp\" (UniqueName: \"kubernetes.io/projected/72158998-bd6f-45ff-b2dd-06e73ee5d53f-kube-api-access-bcqwp\") pod \"dns-default-hkggw\" (UID: \"72158998-bd6f-45ff-b2dd-06e73ee5d53f\") " pod="openshift-dns/dns-default-hkggw"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.121006 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:20 crc kubenswrapper[4820]: E0201 14:23:20.121153 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:20.621126353 +0000 UTC m=+142.141492637 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.121457 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:20 crc kubenswrapper[4820]: E0201 14:23:20.121842 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:20.621830992 +0000 UTC m=+142.142197276 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.127837 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zckvh"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.132802 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnjbh"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.136693 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fcjt\" (UniqueName: \"kubernetes.io/projected/706328b3-d6e3-40b7-8c5f-d475f19ed1fb-kube-api-access-8fcjt\") pod \"ingress-canary-bmjtr\" (UID: \"706328b3-d6e3-40b7-8c5f-d475f19ed1fb\") " pod="openshift-ingress-canary/ingress-canary-bmjtr"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.152211 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sc7h\" (UniqueName: \"kubernetes.io/projected/1d6f1c0a-24b8-4c59-8ce5-7bf838a7836a-kube-api-access-5sc7h\") pod \"package-server-manager-789f6589d5-rbqv9\" (UID: \"1d6f1c0a-24b8-4c59-8ce5-7bf838a7836a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbqv9"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.156648 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58dc7\" (UniqueName: \"kubernetes.io/projected/973ec7e3-13c9-47b8-b10e-5bff2619f164-kube-api-access-58dc7\") pod \"marketplace-operator-79b997595-wgsmb\" (UID: \"973ec7e3-13c9-47b8-b10e-5bff2619f164\") " pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.157730 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbqv9"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.161356 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nlsnf"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.169025 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gtlmh"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.179165 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhqdr\" (UniqueName: \"kubernetes.io/projected/47bf7210-2d6d-484a-bac5-3847ea568287-kube-api-access-fhqdr\") pod \"csi-hostpathplugin-k57sd\" (UID: \"47bf7210-2d6d-484a-bac5-3847ea568287\") " pod="hostpath-provisioner/csi-hostpathplugin-k57sd"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.184760 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssfjj"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.192303 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-w4mdd"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.195388 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqfqp\" (UniqueName: \"kubernetes.io/projected/48b5b565-27da-43ed-9258-f836c7293930-kube-api-access-gqfqp\") pod \"machine-config-operator-74547568cd-g7b4m\" (UID: \"48b5b565-27da-43ed-9258-f836c7293930\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g7b4m"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.199730 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-njr47"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.206757 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-hkggw"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.216012 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bmjtr"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.217066 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f268ee9-8110-411b-870e-30f49eac05e6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gn8tb\" (UID: \"1f268ee9-8110-411b-870e-30f49eac05e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gn8tb"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.222708 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:20 crc kubenswrapper[4820]: E0201 14:23:20.228236 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:20.728206984 +0000 UTC m=+142.248573278 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.236352 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg7q9\" (UniqueName: \"kubernetes.io/projected/a6740627-6bd7-48f8-9dd8-ceccce34fc7f-kube-api-access-pg7q9\") pod \"collect-profiles-29499255-6plmj\" (UID: \"a6740627-6bd7-48f8-9dd8-ceccce34fc7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499255-6plmj"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.238076 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-k57sd"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.255105 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcvqd\" (UniqueName: \"kubernetes.io/projected/62a55b26-3e3b-4eb0-bc0d-546c594f9384-kube-api-access-pcvqd\") pod \"olm-operator-6b444d44fb-5gzxg\" (UID: \"62a55b26-3e3b-4eb0-bc0d-546c594f9384\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5gzxg"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.259611 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-76qdl"]
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.286722 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2szbj"]
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.324570 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:20 crc kubenswrapper[4820]: E0201 14:23:20.325009 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:20.824994791 +0000 UTC m=+142.345361095 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.405010 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gn8tb"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.418327 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g7b4m"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.425747 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:20 crc kubenswrapper[4820]: E0201 14:23:20.426130 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:20.926102682 +0000 UTC m=+142.446468966 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.441069 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5gzxg"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.447059 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.474438 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499255-6plmj"
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.527685 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:20 crc kubenswrapper[4820]: E0201 14:23:20.528080 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:21.028066917 +0000 UTC m=+142.548433201 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:20 crc kubenswrapper[4820]: W0201 14:23:20.594595 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e57d67e_a8ab_4574_91e4_958191d83ad5.slice/crio-39f597742d24db2348ada0dc0ef38ed0b6b17ecc4419351bfcdff2ac502bda6c WatchSource:0}: Error finding container 39f597742d24db2348ada0dc0ef38ed0b6b17ecc4419351bfcdff2ac502bda6c: Status 404 returned error can't find the container with id 39f597742d24db2348ada0dc0ef38ed0b6b17ecc4419351bfcdff2ac502bda6c
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.628592 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:20 crc kubenswrapper[4820]: E0201 14:23:20.628746 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:21.128725887 +0000 UTC m=+142.649092181 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.628842 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:20 crc kubenswrapper[4820]: E0201 14:23:20.629140 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:21.129130398 +0000 UTC m=+142.649496682 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.729517 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:20 crc kubenswrapper[4820]: E0201 14:23:20.729739 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:21.229713135 +0000 UTC m=+142.750079419 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.729992 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:20 crc kubenswrapper[4820]: E0201 14:23:20.730331 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:21.230323261 +0000 UTC m=+142.750689545 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.800167 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lznd8"]
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.830644 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:20 crc kubenswrapper[4820]: E0201 14:23:20.830855 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:21.330829467 +0000 UTC m=+142.851195751 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.831261 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:20 crc kubenswrapper[4820]: E0201 14:23:20.831603 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:21.331591597 +0000 UTC m=+142.851957871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.856816 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t85m5" event={"ID":"9ac3ec5b-0194-404e-876a-7431e3bd1c6e","Type":"ContainerStarted","Data":"b6a5056ed23b6d851d4735cbd5ec7b313a5ca1391bba7b27121cea2b964cf257"}
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.858106 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-m7jf2" event={"ID":"3449e7b8-24df-4789-959f-4ac101303cc2","Type":"ContainerStarted","Data":"530d5df3bbc6a34e3e22bb9085b0f5970e101f722768a673745e19f5f5dd4c9a"}
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.859725 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-46bvd" event={"ID":"d0c60840-a26e-42ee-9c1a-d3d27f4a5b1d","Type":"ContainerStarted","Data":"d6cdbe3a4920962d964c13190d87284d5b8ec2d14e135151b307a3e8a3e10042"}
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.860672 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bktjb" event={"ID":"5ae180b6-b7aa-413f-bf77-a9cad76c629e","Type":"ContainerStarted","Data":"01d8b739e6ad27e983c3d537d4e435804eee2e2d1d9b888d9474ffdb015ae49e"}
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.862023 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" event={"ID":"e8c3f194-8abb-4c41-8418-169b11d6afd2","Type":"ContainerStarted","Data":"18768ad1fb07bb922c05193286fd55e72820472a3f5483bf85c41c26d2550b50"}
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.863353 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g52lb" event={"ID":"1e57d67e-a8ab-4574-91e4-958191d83ad5","Type":"ContainerStarted","Data":"39f597742d24db2348ada0dc0ef38ed0b6b17ecc4419351bfcdff2ac502bda6c"}
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.864241 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n22zj" event={"ID":"76eca8a3-6f22-4275-b05a-51b795162ce3","Type":"ContainerStarted","Data":"08be8aff5677efd8044ea00c84e198a3b931648b6cdcf9dae73e71ebfcac8e97"}
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.865305 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-kh9nd" event={"ID":"3daddc7f-d4d1-4682-97c2-b10266a3ab44","Type":"ContainerStarted","Data":"b68aee3d2fd70dc70ab94710c1d7577ef3e34e3a2c20dc2760ea183579fead79"}
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.865953 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2b7xd" event={"ID":"6b6e0b13-d22d-412b-917d-4601a2421b6b","Type":"ContainerStarted","Data":"8ae017dd6db60cde6738792a3ea25f53d25a9b88e2ab2da4976ba66dbd77b2be"}
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.866646 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-j7nmb" event={"ID":"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e","Type":"ContainerStarted","Data":"df67af08971a11c33d3b0d7e6d5b8b495560cc48d4e7f9e278a09a75048545e2"}
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.933180 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:20 crc kubenswrapper[4820]: E0201 14:23:20.933311 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:21.433279695 +0000 UTC m=+142.953645979 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:20 crc kubenswrapper[4820]: I0201 14:23:20.933515 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:20 crc kubenswrapper[4820]: E0201 14:23:20.933818 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:21.433789388 +0000 UTC m=+142.954155662 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:21 crc kubenswrapper[4820]: E0201 14:23:21.035000 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:21.534976512 +0000 UTC m=+143.055342796 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:21 crc kubenswrapper[4820]: I0201 14:23:21.034741 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:21 crc kubenswrapper[4820]: I0201 14:23:21.036339 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:21 crc kubenswrapper[4820]: W0201 14:23:21.036698 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod198caafc_507d_4c27_87bb_3d99b541e58c.slice/crio-553e9e30251968f7bd2e27a9e82bb5bf2c68a3f165c02f943e15166b516e7f62 WatchSource:0}: Error finding container 553e9e30251968f7bd2e27a9e82bb5bf2c68a3f165c02f943e15166b516e7f62: Status 404 returned error can't find the container with id 553e9e30251968f7bd2e27a9e82bb5bf2c68a3f165c02f943e15166b516e7f62
Feb 01 14:23:21 crc kubenswrapper[4820]: E0201 14:23:21.036731 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:21.536714158 +0000 UTC m=+143.057080442 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:21 crc kubenswrapper[4820]: W0201 14:23:21.044438 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7354f154_8dbb_407c_967c_02986c478d6c.slice/crio-5fd73fc50fab85da64036dc14aea734d5bef5ae91d7413ae44142af578cf45c5 WatchSource:0}: Error finding container 5fd73fc50fab85da64036dc14aea734d5bef5ae91d7413ae44142af578cf45c5: Status 404 returned error can't find the container with id 5fd73fc50fab85da64036dc14aea734d5bef5ae91d7413ae44142af578cf45c5
Feb 01 14:23:21 crc kubenswrapper[4820]: W0201 14:23:21.104028 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6eeba0a6_1ffa_49e2_a3b0_2e19115bd24b.slice/crio-4473d7fab42ccb25777e968e8f98b4aef9befc0011a76aa7a40ccc86895c84b7 WatchSource:0}: Error finding container 4473d7fab42ccb25777e968e8f98b4aef9befc0011a76aa7a40ccc86895c84b7: Status 404 returned error can't find the container with id 4473d7fab42ccb25777e968e8f98b4aef9befc0011a76aa7a40ccc86895c84b7
Feb 01 14:23:21 crc kubenswrapper[4820]: I0201 14:23:21.137675 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:21 crc kubenswrapper[4820]: E0201 14:23:21.137836 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:21.63780955 +0000 UTC m=+143.158175844 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:21 crc kubenswrapper[4820]: I0201 14:23:21.138203 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:21 crc kubenswrapper[4820]: E0201 14:23:21.138754 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:21.638743635 +0000 UTC m=+143.159109919 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:21 crc kubenswrapper[4820]: I0201 14:23:21.239465 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:21 crc kubenswrapper[4820]: E0201 14:23:21.240373 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:21.739563859 +0000 UTC m=+143.259930143 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:21 crc kubenswrapper[4820]: I0201 14:23:21.240413 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:21 crc kubenswrapper[4820]: E0201 14:23:21.240775 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:21.740768561 +0000 UTC m=+143.261134845 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:21 crc kubenswrapper[4820]: I0201 14:23:21.323342 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgsmt"]
Feb 01 14:23:21 crc kubenswrapper[4820]: I0201 14:23:21.341680 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:21 crc kubenswrapper[4820]: E0201 14:23:21.342143 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:21.842129819 +0000 UTC m=+143.362496103 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:21 crc kubenswrapper[4820]: I0201 14:23:21.398050 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sqtv9"]
Feb 01 14:23:21 crc kubenswrapper[4820]: I0201 14:23:21.445146 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:21 crc kubenswrapper[4820]: E0201 14:23:21.445433 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:21.945422299 +0000 UTC m=+143.465788583 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:21 crc kubenswrapper[4820]: I0201 14:23:21.463803 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6ttx5"]
Feb 01 14:23:21 crc kubenswrapper[4820]: I0201 14:23:21.545985 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:21 crc kubenswrapper[4820]: E0201 14:23:21.546159 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:22.04611569 +0000 UTC m=+143.566481974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:21 crc kubenswrapper[4820]: I0201 14:23:21.546398 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:21 crc kubenswrapper[4820]: E0201 14:23:21.546771 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:22.046759307 +0000 UTC m=+143.567125591 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:21 crc kubenswrapper[4820]: I0201 14:23:21.647271 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:21 crc kubenswrapper[4820]: E0201 14:23:21.647494 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:22.147472998 +0000 UTC m=+143.667839292 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:21 crc kubenswrapper[4820]: I0201 14:23:21.647683 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:21 crc kubenswrapper[4820]: E0201 14:23:21.647958 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:22.147951592 +0000 UTC m=+143.668317876 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:21 crc kubenswrapper[4820]: W0201 14:23:21.671902 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc9dfe32_8826_439f_b09b_a4f1fb00cc5b.slice/crio-fe00df313f3e46b82766eb4d3b3b08cebed3a8650875e3a4924f2467e89afdc9 WatchSource:0}: Error finding container fe00df313f3e46b82766eb4d3b3b08cebed3a8650875e3a4924f2467e89afdc9: Status 404 returned error can't find the container with id fe00df313f3e46b82766eb4d3b3b08cebed3a8650875e3a4924f2467e89afdc9
Feb 01 14:23:21 crc kubenswrapper[4820]: I0201 14:23:21.748521 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:21 crc kubenswrapper[4820]: E0201 14:23:21.748990 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:22.24894784 +0000 UTC m=+143.769314124 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:21 crc kubenswrapper[4820]: I0201 14:23:21.850950 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:21 crc kubenswrapper[4820]: E0201 14:23:21.853912 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:22.353896944 +0000 UTC m=+143.874263228 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:21 crc kubenswrapper[4820]: I0201 14:23:21.906474 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-76qdl" event={"ID":"198caafc-507d-4c27-87bb-3d99b541e58c","Type":"ContainerStarted","Data":"553e9e30251968f7bd2e27a9e82bb5bf2c68a3f165c02f943e15166b516e7f62"}
Feb 01 14:23:21 crc kubenswrapper[4820]: I0201 14:23:21.914706 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2szbj" event={"ID":"7354f154-8dbb-407c-967c-02986c478d6c","Type":"ContainerStarted","Data":"5fd73fc50fab85da64036dc14aea734d5bef5ae91d7413ae44142af578cf45c5"}
Feb 01 14:23:21 crc kubenswrapper[4820]: I0201 14:23:21.915715 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6ttx5" event={"ID":"db64eb7b-f370-4277-8983-f7c5e796466c","Type":"ContainerStarted","Data":"9ab27f60428516d4ce777ff1ea5aee3b58e22eb3e6e4d4f9d6792822887632b7"}
Feb 01 14:23:21 crc kubenswrapper[4820]: I0201 14:23:21.917801 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sqtv9" event={"ID":"03bbc152-7ab5-4be7-b3a7-c8f84b8acd14","Type":"ContainerStarted","Data":"b13277006027fb5ca305f2770b43a6bfdc4af71bff2913da39d297e374d945bd"}
Feb 01 14:23:21 crc kubenswrapper[4820]: I0201 14:23:21.918519 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lznd8" event={"ID":"6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b","Type":"ContainerStarted","Data":"4473d7fab42ccb25777e968e8f98b4aef9befc0011a76aa7a40ccc86895c84b7"}
Feb 01 14:23:21 crc kubenswrapper[4820]: I0201 14:23:21.947822 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" event={"ID":"df0df42a-d0cc-4564-856d-a0d3ace0021f","Type":"ContainerStarted","Data":"d1c47ac58b8769e4792389be6a922ea590c86e82e34a5619e49f67620066ceea"}
Feb 01 14:23:21 crc kubenswrapper[4820]: I0201 14:23:21.950147 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-njr47" event={"ID":"bc9dfe32-8826-439f-b09b-a4f1fb00cc5b","Type":"ContainerStarted","Data":"fe00df313f3e46b82766eb4d3b3b08cebed3a8650875e3a4924f2467e89afdc9"}
Feb 01 14:23:21 crc kubenswrapper[4820]: I0201 14:23:21.957833 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:21 crc kubenswrapper[4820]: E0201 14:23:21.958337 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:22.458316464 +0000 UTC m=+143.978682748 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.058995 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:22 crc kubenswrapper[4820]: E0201 14:23:22.059280 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:22.559269581 +0000 UTC m=+144.079635865 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.109529 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ssfjj"]
Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.117686 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-k57sd"]
Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.120911 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nlsnf"]
Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.160604 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:22 crc kubenswrapper[4820]: E0201 14:23:22.160809 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:22.660776854 +0000 UTC m=+144.181143148 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.161092 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:22 crc kubenswrapper[4820]: E0201 14:23:22.161591 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:22.661580855 +0000 UTC m=+144.181947229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.211204 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnjbh"] Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.213829 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bmjtr"] Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.240451 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-g7b4m"] Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.242800 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gtlmh"] Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.262607 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:22 crc kubenswrapper[4820]: E0201 14:23:22.263088 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:22.763070127 +0000 UTC m=+144.283436411 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:22 crc kubenswrapper[4820]: W0201 14:23:22.276259 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e9a3678_76c3_40e5_861d_3e8eb68cd783.slice/crio-8f9986dd1017ca8d339fd36111ddb34ccb44680b08dd378032da232ebd9de67a WatchSource:0}: Error finding container 8f9986dd1017ca8d339fd36111ddb34ccb44680b08dd378032da232ebd9de67a: Status 404 returned error can't find the container with id 8f9986dd1017ca8d339fd36111ddb34ccb44680b08dd378032da232ebd9de67a Feb 01 14:23:22 crc kubenswrapper[4820]: W0201 14:23:22.279327 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23cb7dcf_9cf8_4d10_86dd_496077d38670.slice/crio-a6272d2a9a99179ea3c63fe8c57346b0d0f045656f74f6202fa19c6982d2adff WatchSource:0}: Error finding container a6272d2a9a99179ea3c63fe8c57346b0d0f045656f74f6202fa19c6982d2adff: Status 404 returned error can't find the container with id a6272d2a9a99179ea3c63fe8c57346b0d0f045656f74f6202fa19c6982d2adff Feb 01 14:23:22 crc kubenswrapper[4820]: W0201 14:23:22.280191 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod706328b3_d6e3_40b7_8c5f_d475f19ed1fb.slice/crio-8a2223ba5c65941f212e7e384dc8869a123c6e5f99f0cb76c8e5f686bd14a3a5 WatchSource:0}: Error finding container 8a2223ba5c65941f212e7e384dc8869a123c6e5f99f0cb76c8e5f686bd14a3a5: Status 404 returned error can't find the container with id 8a2223ba5c65941f212e7e384dc8869a123c6e5f99f0cb76c8e5f686bd14a3a5 Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.364352 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:22 crc kubenswrapper[4820]: E0201 14:23:22.364623 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:22.86461108 +0000 UTC m=+144.384977354 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.404738 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xn6hf"] Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.408270 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zckvh"] Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.410960 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rfpxv"] Feb 01 14:23:22 crc kubenswrapper[4820]: W0201 14:23:22.417359 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod760d30a8_d961_45a0_9b33_142f45625c41.slice/crio-4eb5a668850fd56e35ad33afed8412ccd5385d32078a60f406e110e8685ba118 WatchSource:0}: Error finding container 4eb5a668850fd56e35ad33afed8412ccd5385d32078a60f406e110e8685ba118: Status 404 returned error can't find the container with id 4eb5a668850fd56e35ad33afed8412ccd5385d32078a60f406e110e8685ba118 Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.424446 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-w4mdd"] Feb 01 14:23:22 crc kubenswrapper[4820]: W0201 14:23:22.428102 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c41b98a_0076_4305_8540_4365c212bfd2.slice/crio-26b3b817f3ec5a3bc53ecefeb8667c2e7505f3e63dfd3b4b31003f793f57f6d9 WatchSource:0}: Error finding container 26b3b817f3ec5a3bc53ecefeb8667c2e7505f3e63dfd3b4b31003f793f57f6d9: Status 404 returned error can't find the container with id 26b3b817f3ec5a3bc53ecefeb8667c2e7505f3e63dfd3b4b31003f793f57f6d9 Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.428322 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbqv9"] Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.465321 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:22 crc kubenswrapper[4820]: E0201 14:23:22.465701 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:22.965685232 +0000 UTC m=+144.486051516 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:22 crc kubenswrapper[4820]: W0201 14:23:22.467476 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod386886d8_a869_46f4_b9b2_b5e142721ce2.slice/crio-e091bce005d9a52bb3dce56954aa625131347f378f1f61303e090d5956ac3885 WatchSource:0}: Error finding container e091bce005d9a52bb3dce56954aa625131347f378f1f61303e090d5956ac3885: Status 404 returned error can't find the container with id e091bce005d9a52bb3dce56954aa625131347f378f1f61303e090d5956ac3885 Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.494098 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hkggw"] Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.506550 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gn8tb"] Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.537935 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5gzxg"] Feb 01 14:23:22 crc kubenswrapper[4820]: W0201 14:23:22.540707 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72158998_bd6f_45ff_b2dd_06e73ee5d53f.slice/crio-ae4aef99de3d561d45de812cf3ff5af5a41ff5820a2ed7b2d184f31fd192a4fb WatchSource:0}: Error finding container ae4aef99de3d561d45de812cf3ff5af5a41ff5820a2ed7b2d184f31fd192a4fb: Status 404 returned error can't find the container with id ae4aef99de3d561d45de812cf3ff5af5a41ff5820a2ed7b2d184f31fd192a4fb Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.541805 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wgsmb"] Feb 01 14:23:22 crc kubenswrapper[4820]: W0201 14:23:22.546684 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62a55b26_3e3b_4eb0_bc0d_546c594f9384.slice/crio-55f30c817ca06f7dcd6bbeb808f1e129ee0f70eeea804072d654720ac2d22db8 WatchSource:0}: Error finding container 55f30c817ca06f7dcd6bbeb808f1e129ee0f70eeea804072d654720ac2d22db8: Status 404 returned error can't find the container with id 55f30c817ca06f7dcd6bbeb808f1e129ee0f70eeea804072d654720ac2d22db8 Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.552803 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499255-6plmj"] Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.566309 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:22 crc kubenswrapper[4820]: E0201 14:23:22.566602 4820 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:23.066590508 +0000 UTC m=+144.586956792 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.672014 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:22 crc kubenswrapper[4820]: E0201 14:23:22.672171 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:23.172149588 +0000 UTC m=+144.692515882 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.672347 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:22 crc kubenswrapper[4820]: E0201 14:23:22.672709 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:23.172697402 +0000 UTC m=+144.693063686 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.772888 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:22 crc kubenswrapper[4820]: E0201 14:23:22.773427 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:23.273411194 +0000 UTC m=+144.793777478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.874644 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:22 crc kubenswrapper[4820]: E0201 14:23:22.875049 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:23.37503448 +0000 UTC m=+144.895400764 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.955735 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6ttx5" event={"ID":"db64eb7b-f370-4277-8983-f7c5e796466c","Type":"ContainerStarted","Data":"aa45f8745835e20da5deff4272911ae5439761ce8c6199f1ef4524d029574c07"} Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.957076 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-6xsfq" event={"ID":"7ef76960-0097-4477-ae8f-0f6ddb18920b","Type":"ContainerStarted","Data":"2ca3810e81a4e4244dcf6be83dcd80b8a684fa3ff510f013c78664c007a2afbf"} Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.958069 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499255-6plmj" event={"ID":"a6740627-6bd7-48f8-9dd8-ceccce34fc7f","Type":"ContainerStarted","Data":"c3b30b159dbefa4fd4062212a086136c56412c66b2552c062e1b223861893761"} Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.964914 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n22zj" event={"ID":"76eca8a3-6f22-4275-b05a-51b795162ce3","Type":"ContainerStarted","Data":"4ce48e953f23e72dff4cbb6c9f7716f09711c823d624fbee8773971edb8c6c9b"} Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.966813 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-j7nmb" event={"ID":"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e","Type":"ContainerStarted","Data":"b5b126410647fb7365db11227a88f9218488b2a524a021bbd6216ea494032de0"} Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.968258 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t85m5" event={"ID":"9ac3ec5b-0194-404e-876a-7431e3bd1c6e","Type":"ContainerStarted","Data":"f4ea2ff2d71fe0bbcffef40db32473458defadbfbe428cef50b1fcfb5675ad8c"} Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.968959 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbqv9" event={"ID":"1d6f1c0a-24b8-4c59-8ce5-7bf838a7836a","Type":"ContainerStarted","Data":"db0c22b10545f9f06d3c87a0c51c036ee0aacc82c3de4125e8200a55d57fdace"} Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.970090 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bktjb" event={"ID":"5ae180b6-b7aa-413f-bf77-a9cad76c629e","Type":"ContainerStarted","Data":"b4a9791a98911ce1dbbae82dedb7afb628c2a25659c97511970c3f1254bd2261"} Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.971912 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5gzxg" event={"ID":"62a55b26-3e3b-4eb0-bc0d-546c594f9384","Type":"ContainerStarted","Data":"55f30c817ca06f7dcd6bbeb808f1e129ee0f70eeea804072d654720ac2d22db8"} Feb 01 14:23:22 
crc kubenswrapper[4820]: I0201 14:23:22.972772 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" event={"ID":"d0e5cde7-6e0f-4213-94d4-746cfdb568e9","Type":"ContainerStarted","Data":"bfe4f2a5db1e5e180615572207819f154a35230e96e6cf3ae4d65a4608c93e0d"} Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.973939 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.975302 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2b7xd" event={"ID":"6b6e0b13-d22d-412b-917d-4601a2421b6b","Type":"ContainerStarted","Data":"7fc157a00c8b12a726deed1f928678e119bc6f4765caa92f212e85d02f0a94a2"} Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.975710 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:22 crc kubenswrapper[4820]: E0201 14:23:22.976127 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:23.47611381 +0000 UTC m=+144.996480094 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.977066 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k57sd" event={"ID":"47bf7210-2d6d-484a-bac5-3847ea568287","Type":"ContainerStarted","Data":"4d1d9474a381e9f87bdfdc135de2e64db0685ba8279e8c45c774f939aebb7b49"} Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.977179 4820 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-f4mwh container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.977215 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" podUID="d0e5cde7-6e0f-4213-94d4-746cfdb568e9" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.979095 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-w4mdd" event={"ID":"386886d8-a869-46f4-b9b2-b5e142721ce2","Type":"ContainerStarted","Data":"e091bce005d9a52bb3dce56954aa625131347f378f1f61303e090d5956ac3885"} Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.980837 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-njr47" event={"ID":"bc9dfe32-8826-439f-b09b-a4f1fb00cc5b","Type":"ContainerStarted","Data":"e8c70c58a85f1df1fdbad65626cbf479b8a26c39f5eece58d336359d4f736a3c"} Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.982092 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bmjtr" event={"ID":"706328b3-d6e3-40b7-8c5f-d475f19ed1fb","Type":"ContainerStarted","Data":"8a2223ba5c65941f212e7e384dc8869a123c6e5f99f0cb76c8e5f686bd14a3a5"} Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.983460 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" event={"ID":"973ec7e3-13c9-47b8-b10e-5bff2619f164","Type":"ContainerStarted","Data":"e04961e142979945e79e51c2733cbb7723cac94e8dc0b3ad0177a8d4772cfdc1"} Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.987120 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-76qdl" event={"ID":"198caafc-507d-4c27-87bb-3d99b541e58c","Type":"ContainerStarted","Data":"5455c585a81a3ecf887a61d5fbb33d19049da054c0599c8cce29a1a227759c4b"} Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.988706 4820 generic.go:334] "Generic (PLEG): container finished" podID="4e187fc4-2932-4e70-81d7-34fe2c16dcda" containerID="b002c3e402e2ef094e9ffc8a933bcb2d1effd27d8e373f88733c57bdec16aba3" exitCode=0 Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.988750 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" event={"ID":"4e187fc4-2932-4e70-81d7-34fe2c16dcda","Type":"ContainerDied","Data":"b002c3e402e2ef094e9ffc8a933bcb2d1effd27d8e373f88733c57bdec16aba3"} Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.993564 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gn8tb" event={"ID":"1f268ee9-8110-411b-870e-30f49eac05e6","Type":"ContainerStarted","Data":"590fd3b669571a00ce8229235dc9d62ddad4c19700bca1d945d30c0f5d02268c"} Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.995022 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgsmt" event={"ID":"29786e24-0b8b-48d2-9759-74b6e6011d53","Type":"ContainerStarted","Data":"a27e5423208732149d1ecefceae9eada0000cf41bf76e782664022026d0a1458"} Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.995183 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" podStartSLOduration=123.995167758 podStartE2EDuration="2m3.995167758s" podCreationTimestamp="2026-02-01 14:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:22.992843916 +0000 UTC m=+144.513210210" watchObservedRunningTime="2026-02-01 14:23:22.995167758 +0000 UTC m=+144.515534072" Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.995774 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xn6hf" event={"ID":"0c41b98a-0076-4305-8540-4365c212bfd2","Type":"ContainerStarted","Data":"26b3b817f3ec5a3bc53ecefeb8667c2e7505f3e63dfd3b4b31003f793f57f6d9"} Feb 01 14:23:22 crc kubenswrapper[4820]: 
I0201 14:23:22.997502 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nlsnf" event={"ID":"7ff1096c-6211-4a8c-983d-a03517127437","Type":"ContainerStarted","Data":"3c5f3ea7ff4e1c23bf8dc42d09f6aec95b61cc330c8dbaeb5a383d6f248ef223"} Feb 01 14:23:22 crc kubenswrapper[4820]: I0201 14:23:22.997638 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nlsnf" event={"ID":"7ff1096c-6211-4a8c-983d-a03517127437","Type":"ContainerStarted","Data":"97a7c518547e2460597a42d9adf64c362a36e236cb48bc656d07bc4935addadf"} Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.000358 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5rtqk" event={"ID":"040a4a57-4d8b-4209-8330-a0e2f3195384","Type":"ContainerStarted","Data":"6217f595ba0e3dd205467af942fb95dd1f2697a29eb910998d42f1d1f3f1e45c"} Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.003409 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-m7jf2" event={"ID":"3449e7b8-24df-4789-959f-4ac101303cc2","Type":"ContainerStarted","Data":"9aab66572e67af8077497567eb1e0b420b0816eb5188a7b9291b04d10be2dbdb"} Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.005439 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sqtv9" event={"ID":"03bbc152-7ab5-4be7-b3a7-c8f84b8acd14","Type":"ContainerStarted","Data":"475c63d746a8e7f24b89a4af2a2b8119e661e38a454f02a13de2cc38b72b2268"} Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.006644 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" event={"ID":"8c2d6da6-c397-4077-81b1-d5b492811214","Type":"ContainerStarted","Data":"71646fa262d84e8e47f34000feeeeb0f34845964dc4ae9f28d63a2b46058227b"} Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.006824 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.008374 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-46bvd" event={"ID":"d0c60840-a26e-42ee-9c1a-d3d27f4a5b1d","Type":"ContainerStarted","Data":"5a73349cf80c051919f7b6229b48032ac1579ea4c2b25a125455a01f411d0f7d"} Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.010746 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lznd8" event={"ID":"6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b","Type":"ContainerStarted","Data":"1b196d07d181521f4f40621d11078cc516f6912d588dbf484c534ba915cfcaec"} Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.011008 4820 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-jzsbs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.011085 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" podUID="8c2d6da6-c397-4077-81b1-d5b492811214" containerName="controller-manager" probeResult="failure" 
output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.012199 4820 generic.go:334] "Generic (PLEG): container finished" podID="e8c3f194-8abb-4c41-8418-169b11d6afd2" containerID="eb80c3cfbe85f54f9480a8dd989c3e246bbc498bae315f31b708d6ddcf176ff9" exitCode=0 Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.012261 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" event={"ID":"e8c3f194-8abb-4c41-8418-169b11d6afd2","Type":"ContainerDied","Data":"eb80c3cfbe85f54f9480a8dd989c3e246bbc498bae315f31b708d6ddcf176ff9"} Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.013925 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2szbj" event={"ID":"7354f154-8dbb-407c-967c-02986c478d6c","Type":"ContainerStarted","Data":"b13bd161cd6fb88a42665893cc7fa1daa905dde02440324dfa2d88c7bddb398f"} Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.014587 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gtlmh" event={"ID":"23cb7dcf-9cf8-4d10-86dd-496077d38670","Type":"ContainerStarted","Data":"a6272d2a9a99179ea3c63fe8c57346b0d0f045656f74f6202fa19c6982d2adff"} Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.016341 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-kh9nd" event={"ID":"3daddc7f-d4d1-4682-97c2-b10266a3ab44","Type":"ContainerStarted","Data":"8f3485f20876c2c8f3b6182114aa5af7014b0bfc5518f5bc606eed1a83d8f855"} Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.018652 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rfpxv" event={"ID":"760d30a8-d961-45a0-9b33-142f45625c41","Type":"ContainerStarted","Data":"4eb5a668850fd56e35ad33afed8412ccd5385d32078a60f406e110e8685ba118"} Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.020443 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g7b4m" event={"ID":"48b5b565-27da-43ed-9258-f836c7293930","Type":"ContainerStarted","Data":"905eb1512e86b29bd8929ab3e812ff16a3c532bb133e290b10c53b9a7f85b2f1"} Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.020473 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g7b4m" event={"ID":"48b5b565-27da-43ed-9258-f836c7293930","Type":"ContainerStarted","Data":"38d388903e73511a56b9ae73dc639bfae113a043d60163bfd6c82afbe2a1d461"} Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.021799 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssfjj" event={"ID":"0ea50054-9a92-447e-aa45-115789f1cad3","Type":"ContainerStarted","Data":"4c6a01f8bf9d68f32011feb63ac901d46b416a0427e35a857de35929b2b588ea"} Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.021837 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssfjj" event={"ID":"0ea50054-9a92-447e-aa45-115789f1cad3","Type":"ContainerStarted","Data":"ef444f6c163eb6e19fd86656d37921b531add7ef28f36a53f595096d19f65093"} Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.022595 4820 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zckvh" event={"ID":"e5053e28-cfcd-4e0b-aadc-18ed97ea8c3d","Type":"ContainerStarted","Data":"9a915d89743d93b67de76a8e7bcaf6dbe92f71951a9dd648bf2c6646982d5c0e"} Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.023654 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g52lb" event={"ID":"1e57d67e-a8ab-4574-91e4-958191d83ad5","Type":"ContainerStarted","Data":"7f6a3beda0f3d7aa47eb6db4353b9638d140badd9d4c5eed0ceeb87aadd8cf27"} Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.024526 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hkggw" event={"ID":"72158998-bd6f-45ff-b2dd-06e73ee5d53f","Type":"ContainerStarted","Data":"ae4aef99de3d561d45de812cf3ff5af5a41ff5820a2ed7b2d184f31fd192a4fb"} Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.025777 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jc9mp" event={"ID":"91004505-bf59-410e-831a-62e980857994","Type":"ContainerStarted","Data":"dfc4d47f846ac4a64c120a785dee23280d006d5afe49939195ac23812629f10b"} Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.027116 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnjbh" event={"ID":"5e9a3678-76c3-40e5-861d-3e8eb68cd783","Type":"ContainerStarted","Data":"27bdb2eda49785d984b08f040305e25ab41c2e93d58f84442ac2d6d737c8c662"} Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.027223 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnjbh" event={"ID":"5e9a3678-76c3-40e5-861d-3e8eb68cd783","Type":"ContainerStarted","Data":"8f9986dd1017ca8d339fd36111ddb34ccb44680b08dd378032da232ebd9de67a"} Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.027550 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.029326 4820 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-95545 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.029479 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" podUID="df0df42a-d0cc-4564-856d-a0d3ace0021f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.056146 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-5rtqk" podStartSLOduration=123.05612706 podStartE2EDuration="2m3.05612706s" podCreationTimestamp="2026-02-01 14:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:23.055858283 +0000 UTC m=+144.576224577" watchObservedRunningTime="2026-02-01 14:23:23.05612706 +0000 UTC m=+144.576493334" Feb 01 
14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.077637 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.078802 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" podStartSLOduration=124.078784423 podStartE2EDuration="2m4.078784423s" podCreationTimestamp="2026-02-01 14:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:23.076591026 +0000 UTC m=+144.596957320" watchObservedRunningTime="2026-02-01 14:23:23.078784423 +0000 UTC m=+144.599150707" Feb 01 14:23:23 crc kubenswrapper[4820]: E0201 14:23:23.079636 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:23.579623226 +0000 UTC m=+145.099989510 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.098183 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" podStartSLOduration=123.098165949 podStartE2EDuration="2m3.098165949s" podCreationTimestamp="2026-02-01 14:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:23.095038796 +0000 UTC m=+144.615405080" watchObservedRunningTime="2026-02-01 14:23:23.098165949 +0000 UTC m=+144.618532233" Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.181288 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:23 crc kubenswrapper[4820]: E0201 14:23:23.184582 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:23.682419312 +0000 UTC m=+145.202785596 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.282665 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:23 crc kubenswrapper[4820]: E0201 14:23:23.283145 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:23.783129294 +0000 UTC m=+145.303495578 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.404387 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:23 crc kubenswrapper[4820]: E0201 14:23:23.404538 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:23.904514486 +0000 UTC m=+145.424880770 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.405018 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:23 crc kubenswrapper[4820]: E0201 14:23:23.406049 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:23.906023616 +0000 UTC m=+145.426389970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.512572 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:23 crc kubenswrapper[4820]: E0201 14:23:23.513183 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:24.013167099 +0000 UTC m=+145.533533383 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.613822 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:23 crc kubenswrapper[4820]: E0201 14:23:23.618068 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:24.118050791 +0000 UTC m=+145.638417075 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.718448 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:23 crc kubenswrapper[4820]: E0201 14:23:23.718757 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:24.218743972 +0000 UTC m=+145.739110256 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.722492 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-5rtqk"
Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.726801 4820 patch_prober.go:28] interesting pod/router-default-5444994796-5rtqk container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.726842 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rtqk" podUID="040a4a57-4d8b-4209-8330-a0e2f3195384" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.819866 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:23 crc kubenswrapper[4820]: E0201 14:23:23.821018 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:24.321001044 +0000 UTC m=+145.841367328 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.920904 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:23 crc kubenswrapper[4820]: E0201 14:23:23.920976 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:24.420957145 +0000 UTC m=+145.941323439 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:23 crc kubenswrapper[4820]: I0201 14:23:23.921833 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:23 crc kubenswrapper[4820]: E0201 14:23:23.922194 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:24.422180627 +0000 UTC m=+145.942546911 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.022896 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:24 crc kubenswrapper[4820]: E0201 14:23:24.023083 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:24.523060143 +0000 UTC m=+146.043426427 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.023430 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:24 crc kubenswrapper[4820]: E0201 14:23:24.023773 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:24.523748001 +0000 UTC m=+146.044114285 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.043895 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t85m5" event={"ID":"9ac3ec5b-0194-404e-876a-7431e3bd1c6e","Type":"ContainerStarted","Data":"ee48508bba47615c15c312ae60610c5fbbd6c7b803278719ae90dd83c59acb14"}
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.050699 4820 generic.go:334] "Generic (PLEG): container finished" podID="91004505-bf59-410e-831a-62e980857994" containerID="dfc4d47f846ac4a64c120a785dee23280d006d5afe49939195ac23812629f10b" exitCode=0
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.050760 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jc9mp" event={"ID":"91004505-bf59-410e-831a-62e980857994","Type":"ContainerDied","Data":"dfc4d47f846ac4a64c120a785dee23280d006d5afe49939195ac23812629f10b"}
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.050785 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jc9mp" event={"ID":"91004505-bf59-410e-831a-62e980857994","Type":"ContainerStarted","Data":"d73e960e1a590739ed7902b084df6d8adc1a7ca9a3409503696a2e63984b47fb"}
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.051598 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jc9mp"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.063571 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-t85m5" podStartSLOduration=125.063555491 podStartE2EDuration="2m5.063555491s" podCreationTimestamp="2026-02-01 14:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.062225366 +0000 UTC m=+145.582591650" watchObservedRunningTime="2026-02-01 14:23:24.063555491 +0000 UTC m=+145.583921775"
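[Editor's note] The repeating MountVolume/UnmountVolume pairs above are the kubelet's volume reconciler retrying against a CSI driver (kubevirt.io.hostpath-provisioner) that has not yet re-registered with the kubelet after the node restart; each failure schedules the next attempt 500ms later, visible as "durationBeforeRetry 500ms". A minimal Go sketch of that lookup-then-fixed-backoff pattern, under stated assumptions: driverRegistry and its methods are hypothetical illustration types, not the kubelet's actual code.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// driverRegistry models the kubelet's table of registered CSI node
// plugins (hypothetical type, for illustration only).
type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]struct{}
}

func (r *driverRegistry) registered(name string) bool {
	r.mu.RLock()
	defer r.mu.RUnlock()
	_, ok := r.drivers[name]
	return ok
}

func (r *driverRegistry) register(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = struct{}{}
}

func main() {
	reg := &driverRegistry{drivers: make(map[string]struct{})}
	const driver = "kubevirt.io.hostpath-provisioner"

	// The node plugin registers a little later, as happens once the
	// provisioner pod comes back up after a node reboot.
	go func() {
		time.Sleep(1200 * time.Millisecond)
		reg.register(driver)
	}()

	// Fixed backoff mirroring the "durationBeforeRetry 500ms" in the log.
	const durationBeforeRetry = 500 * time.Millisecond
	for attempt := 1; ; attempt++ {
		if reg.registered(driver) {
			fmt.Printf("attempt %d: driver %s registered, MountDevice can proceed\n", attempt, driver)
			return
		}
		fmt.Printf("attempt %d: driver name %s not found in the list of registered CSI drivers; retrying in %v\n",
			attempt, driver, durationBeforeRetry)
		time.Sleep(durationBeforeRetry)
	}
}
```

Once the driver registers, the same operations succeed on the next 500ms tick, which is why these records stop appearing later in the journal without any other intervention.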
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.062225366 +0000 UTC m=+145.582591650" watchObservedRunningTime="2026-02-01 14:23:24.063555491 +0000 UTC m=+145.583921775" Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.064368 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rfpxv" event={"ID":"760d30a8-d961-45a0-9b33-142f45625c41","Type":"ContainerStarted","Data":"3e85b01692b8a9b0a17e595e0fb95c1e5a72e65fc04f7935aa6fa2c94091cbc9"} Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.064421 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rfpxv" event={"ID":"760d30a8-d961-45a0-9b33-142f45625c41","Type":"ContainerStarted","Data":"bcfa5300b8562ad260dbfbd6a9324c813b709bc472f60936e906f30e7b839d50"} Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.070049 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssfjj" event={"ID":"0ea50054-9a92-447e-aa45-115789f1cad3","Type":"ContainerStarted","Data":"273c83e80e3256adb9486c5b591bb92660a16fba40e11a84467037ae000c7e6e"} Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.097724 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n22zj" event={"ID":"76eca8a3-6f22-4275-b05a-51b795162ce3","Type":"ContainerStarted","Data":"38af40db4137ac49fa9969cd5be47e20b46ed2fd8f29a130b2a91858f34b9ca7"} Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.100369 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-w4mdd" event={"ID":"386886d8-a869-46f4-b9b2-b5e142721ce2","Type":"ContainerStarted","Data":"ade8cdf5fc1f39c49dfa01835cf07e1a4fb42f7a61752c33a45de2218952438f"} Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.106912 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jc9mp" podStartSLOduration=125.106896805 podStartE2EDuration="2m5.106896805s" podCreationTimestamp="2026-02-01 14:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.105444317 +0000 UTC m=+145.625810601" watchObservedRunningTime="2026-02-01 14:23:24.106896805 +0000 UTC m=+145.627263089" Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.107358 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zckvh" event={"ID":"e5053e28-cfcd-4e0b-aadc-18ed97ea8c3d","Type":"ContainerStarted","Data":"c68940284fdfca61b7870e7e58b5a246e05fdef5ad8d3d11506d62d06739b965"} Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.108015 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zckvh" event={"ID":"e5053e28-cfcd-4e0b-aadc-18ed97ea8c3d","Type":"ContainerStarted","Data":"0f0d8c9cef21501253ab7b0fbd17a621f6c340ae8a805e14a2a29af659f9a98f"} Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.116695 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" 
event={"ID":"4e187fc4-2932-4e70-81d7-34fe2c16dcda","Type":"ContainerStarted","Data":"5f135fc9396a897f68f0090ac2eba2c6cb3825f367b2040ed5d587d14eea8d11"} Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.125121 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5gzxg" event={"ID":"62a55b26-3e3b-4eb0-bc0d-546c594f9384","Type":"ContainerStarted","Data":"8ff774388977a9ba183930335ced9ee514cac18634e95aa929b9655d5c9a737c"} Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.126189 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:24 crc kubenswrapper[4820]: E0201 14:23:24.126381 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:24.626366544 +0000 UTC m=+146.146732828 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.126734 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.126230 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5gzxg" Feb 01 14:23:24 crc kubenswrapper[4820]: E0201 14:23:24.128895 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:24.628861549 +0000 UTC m=+146.149227833 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.129301 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rfpxv" podStartSLOduration=124.129291341 podStartE2EDuration="2m4.129291341s" podCreationTimestamp="2026-02-01 14:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.124735219 +0000 UTC m=+145.645101493" watchObservedRunningTime="2026-02-01 14:23:24.129291341 +0000 UTC m=+145.649657625" Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.130398 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-kh9nd" event={"ID":"3daddc7f-d4d1-4682-97c2-b10266a3ab44","Type":"ContainerStarted","Data":"0ccee8b141312cb0b8af1936180da9fd8df7ec05e5d07413ca9cf794615eb6e9"} Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.136061 4820 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5gzxg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.136120 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5gzxg" podUID="62a55b26-3e3b-4eb0-bc0d-546c594f9384" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.141986 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hkggw" event={"ID":"72158998-bd6f-45ff-b2dd-06e73ee5d53f","Type":"ContainerStarted","Data":"7e807fb5316aefd3e7822e52cab7226b1666d2da9badd634a651faabe5c2af47"} Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.155752 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n22zj" podStartSLOduration=125.155731836 podStartE2EDuration="2m5.155731836s" podCreationTimestamp="2026-02-01 14:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.154033169 +0000 UTC m=+145.674399453" watchObservedRunningTime="2026-02-01 14:23:24.155731836 +0000 UTC m=+145.676098120" Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.158984 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbqv9" event={"ID":"1d6f1c0a-24b8-4c59-8ce5-7bf838a7836a","Type":"ContainerStarted","Data":"ed5ab78dc7e7fbaabf379b8c1e44d5d84c35e8ed9de9f508e1cd0501d8b798c0"} Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.159022 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
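[Editor's note] The "SyncLoop (PLEG)" records above come from the kubelet's Pod Lifecycle Event Generator, which relists container runtime state and emits ContainerStarted/ContainerDied events keyed by pod UID with a container ID as payload. A small sketch of that event shape, using values copied from the records above; the struct is hypothetical, for illustration only, not kubelet's internal type.

```go
package main

import "fmt"

// podLifecycleEvent mirrors the fields visible in the log records:
// a pod UID, an event type, and a container ID as data.
type podLifecycleEvent struct {
	ID   string // pod UID
	Type string // "ContainerStarted" or "ContainerDied"
	Data string // container (or sandbox) ID
}

func main() {
	// A ContainerDied immediately followed by a ContainerStarted for
	// the same pod UID is the signature of a container restart.
	events := []podLifecycleEvent{
		{ID: "91004505-bf59-410e-831a-62e980857994", Type: "ContainerDied",
			Data: "dfc4d47f846ac4a64c120a785dee23280d006d5afe49939195ac23812629f10b"},
		{ID: "91004505-bf59-410e-831a-62e980857994", Type: "ContainerStarted",
			Data: "d73e960e1a590739ed7902b084df6d8adc1a7ca9a3409503696a2e63984b47fb"},
	}
	for _, e := range events {
		fmt.Printf("SyncLoop (PLEG): event for pod %s: %s %s\n", e.ID, e.Type, e.Data)
	}
}
```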
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbqv9" event={"ID":"1d6f1c0a-24b8-4c59-8ce5-7bf838a7836a","Type":"ContainerStarted","Data":"6490f2863530b2fb27159cf7b6a9c0ce91a47541de263c592ad3a5e032f707ed"} Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.159596 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbqv9" Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.162549 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bmjtr" event={"ID":"706328b3-d6e3-40b7-8c5f-d475f19ed1fb","Type":"ContainerStarted","Data":"de28a20d6e45109b6da6c3f5ac1f7306ca0e0eb5e0de19b3aae9843531123ebc"} Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.164363 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lznd8" event={"ID":"6eeba0a6-1ffa-49e2-a3b0-2e19115bd24b","Type":"ContainerStarted","Data":"e098ce52fbcfe425fd16775e5b4a28fafbd9b69c22eb151aae74c5abff9e5f26"} Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.165646 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xn6hf" event={"ID":"0c41b98a-0076-4305-8540-4365c212bfd2","Type":"ContainerStarted","Data":"177a35775674c12d053356d03ea9e4ebb0b75c5e89844028e83b558624e7df50"} Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.166367 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xn6hf" Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.169301 4820 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-xn6hf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.169332 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xn6hf" podUID="0c41b98a-0076-4305-8540-4365c212bfd2" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.171127 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g7b4m" event={"ID":"48b5b565-27da-43ed-9258-f836c7293930","Type":"ContainerStarted","Data":"0bc55f21bc271a2b4003ee6bccc67a1e6152db69cfc803d3a3830aaec6ab6f8b"} Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.186447 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gtlmh" event={"ID":"23cb7dcf-9cf8-4d10-86dd-496077d38670","Type":"ContainerStarted","Data":"415f53cc9f8631401e0399c10fe7b5df97b0092a4b149f040cf67f66913ef289"} Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.187289 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gtlmh" Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.188375 4820 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-gtlmh container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.188408 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gtlmh" podUID="23cb7dcf-9cf8-4d10-86dd-496077d38670" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.195637 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssfjj" podStartSLOduration=124.195619617 podStartE2EDuration="2m4.195619617s" podCreationTimestamp="2026-02-01 14:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.193933322 +0000 UTC m=+145.714299606" watchObservedRunningTime="2026-02-01 14:23:24.195619617 +0000 UTC m=+145.715985901" Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.200104 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" event={"ID":"973ec7e3-13c9-47b8-b10e-5bff2619f164","Type":"ContainerStarted","Data":"b346df61d5c736e0515c1075d0d1bbb5fcae41532d5ff6c94b78ff3f5445cd50"} Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.200950 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.208280 4820 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wgsmb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.208337 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" podUID="973ec7e3-13c9-47b8-b10e-5bff2619f164" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.217699 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgsmt" event={"ID":"29786e24-0b8b-48d2-9759-74b6e6011d53","Type":"ContainerStarted","Data":"fe819aaa9eb980640a84228ca1ddcce511bb65c1402567b177209998626ca662"} Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.217753 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgsmt" event={"ID":"29786e24-0b8b-48d2-9759-74b6e6011d53","Type":"ContainerStarted","Data":"abe101b01fb5111f3956a782cac198b725310836f66cc303ad1a0cbd35dc1174"} Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.223502 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499255-6plmj" event={"ID":"a6740627-6bd7-48f8-9dd8-ceccce34fc7f","Type":"ContainerStarted","Data":"e5ffadf679e3748ef24b06383df54de342b0be63fba86a3927aebe26fd368d18"} Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.228350 4820 
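[Editor's note] Every readiness failure in this stretch has the same shape: the container has just started, its HTTP(S) endpoint is not listening yet, so the probe's GET is refused at the TCP level ("connect: connection refused"). A minimal Go sketch of an HTTP prober in that style, under stated assumptions: simplified relative to the kubelet's prober (no TLS config, headers, or thresholds), and the endpoint below is just one copied from the log.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP readiness check: any connection error or a
// non-success status code counts as a failure. Illustrative sketch only.
func probe(url string) (ok bool, detail string) {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "dial tcp 10.217.0.31:8080: connect: connection refused"
		return false, err.Error()
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return false, fmt.Sprintf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return true, resp.Status
}

func main() {
	ok, detail := probe("http://10.217.0.31:8080/healthz") // endpoint from the log above
	fmt.Printf("probeResult=%v output=%q\n", ok, detail)
}
```

The corresponding "SyncLoop (probe)" records with status="" mark the kubelet clearing the result once the endpoint starts answering.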
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.228350 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:24 crc kubenswrapper[4820]: E0201 14:23:24.229759 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:24.729730165 +0000 UTC m=+146.250096449 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.232018 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" event={"ID":"e8c3f194-8abb-4c41-8418-169b11d6afd2","Type":"ContainerStarted","Data":"fc1f781399cf4dcbca8dcaa7672a3dcc502bec5c9170d6394709f84f54ac2f49"}
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.235970 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gn8tb" event={"ID":"1f268ee9-8110-411b-870e-30f49eac05e6","Type":"ContainerStarted","Data":"0bafa457287548b9655b0e2602c299c6a739065d328a98d68a1bd29b570eee58"}
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.236010 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-m7jf2"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.237318 4820 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-f4mwh container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body=
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.237363 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-m7jf2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body=
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.237406 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-m7jf2" podUID="3449e7b8-24df-4789-959f-4ac101303cc2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.237363 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" podUID="d0e5cde7-6e0f-4213-94d4-746cfdb568e9" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.237625 4820 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-jzsbs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.237651 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" podUID="8c2d6da6-c397-4077-81b1-d5b492811214" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.237671 4820 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-95545 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.237697 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" podUID="df0df42a-d0cc-4564-856d-a0d3ace0021f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.255482 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-gwgjr"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.255544 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-gwgjr"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.258955 4820 patch_prober.go:28] interesting pod/apiserver-76f77b778f-gwgjr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.259015 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" podUID="e8c3f194-8abb-4c41-8418-169b11d6afd2" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.259189 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-w4mdd" podStartSLOduration=124.25917281 podStartE2EDuration="2m4.25917281s" podCreationTimestamp="2026-02-01 14:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.249379239 +0000 UTC m=+145.769745523" watchObservedRunningTime="2026-02-01 14:23:24.25917281 +0000 UTC m=+145.779539094"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.273418 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.274958 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.275230 4820 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-dgds8 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.13:8443/livez\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.275275 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" podUID="4e187fc4-2932-4e70-81d7-34fe2c16dcda" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.13:8443/livez\": dial tcp 10.217.0.13:8443: connect: connection refused"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.331833 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:24 crc kubenswrapper[4820]: E0201 14:23:24.343377 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:24.843361311 +0000 UTC m=+146.363727595 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.377584 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-kh9nd" podStartSLOduration=124.377563451 podStartE2EDuration="2m4.377563451s" podCreationTimestamp="2026-02-01 14:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.29149771 +0000 UTC m=+145.811864004" watchObservedRunningTime="2026-02-01 14:23:24.377563451 +0000 UTC m=+145.897929745"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.378433 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gtlmh" podStartSLOduration=124.378425464 podStartE2EDuration="2m4.378425464s" podCreationTimestamp="2026-02-01 14:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.375357612 +0000 UTC m=+145.895723896" watchObservedRunningTime="2026-02-01 14:23:24.378425464 +0000 UTC m=+145.898791748"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.404216 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-6ttx5" podStartSLOduration=125.40419735 podStartE2EDuration="2m5.40419735s" podCreationTimestamp="2026-02-01 14:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.40340553 +0000 UTC m=+145.923771814" watchObservedRunningTime="2026-02-01 14:23:24.40419735 +0000 UTC m=+145.924563634"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.432669 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:24 crc kubenswrapper[4820]: E0201 14:23:24.433004 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:24.932989127 +0000 UTC m=+146.453355411 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.441933 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gn8tb" podStartSLOduration=124.441913924 podStartE2EDuration="2m4.441913924s" podCreationTimestamp="2026-02-01 14:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.440931568 +0000 UTC m=+145.961297862" watchObservedRunningTime="2026-02-01 14:23:24.441913924 +0000 UTC m=+145.962280198"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.471350 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-njr47" podStartSLOduration=7.471325367 podStartE2EDuration="7.471325367s" podCreationTimestamp="2026-02-01 14:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.464067314 +0000 UTC m=+145.984433628" watchObservedRunningTime="2026-02-01 14:23:24.471325367 +0000 UTC m=+145.991691651"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.507990 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" podStartSLOduration=125.507972983 podStartE2EDuration="2m5.507972983s" podCreationTimestamp="2026-02-01 14:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.507062199 +0000 UTC m=+146.027428483" watchObservedRunningTime="2026-02-01 14:23:24.507972983 +0000 UTC m=+146.028339267"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.535641 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-j7nmb" podStartSLOduration=125.535626939 podStartE2EDuration="2m5.535626939s" podCreationTimestamp="2026-02-01 14:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.527843561 +0000 UTC m=+146.048209845" watchObservedRunningTime="2026-02-01 14:23:24.535626939 +0000 UTC m=+146.055993223"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.535940 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:24 crc kubenswrapper[4820]: E0201 14:23:24.536311 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:25.036296907 +0000 UTC m=+146.556663191 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.567326 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5gzxg" podStartSLOduration=124.567308013 podStartE2EDuration="2m4.567308013s" podCreationTimestamp="2026-02-01 14:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.563753548 +0000 UTC m=+146.084119832" watchObservedRunningTime="2026-02-01 14:23:24.567308013 +0000 UTC m=+146.087674297"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.585902 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-hkggw" podStartSLOduration=7.585870457 podStartE2EDuration="7.585870457s" podCreationTimestamp="2026-02-01 14:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.585347402 +0000 UTC m=+146.105713676" watchObservedRunningTime="2026-02-01 14:23:24.585870457 +0000 UTC m=+146.106236741"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.608365 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-bmjtr" podStartSLOduration=7.608344145 podStartE2EDuration="7.608344145s" podCreationTimestamp="2026-02-01 14:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.604965295 +0000 UTC m=+146.125331589" watchObservedRunningTime="2026-02-01 14:23:24.608344145 +0000 UTC m=+146.128710429"
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:24 crc kubenswrapper[4820]: E0201 14:23:24.638135 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:25.138120388 +0000 UTC m=+146.658486692 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.651749 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-m7jf2" podStartSLOduration=125.65173248 podStartE2EDuration="2m5.65173248s" podCreationTimestamp="2026-02-01 14:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.649672445 +0000 UTC m=+146.170038739" watchObservedRunningTime="2026-02-01 14:23:24.65173248 +0000 UTC m=+146.172098764" Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.710443 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbqv9" podStartSLOduration=124.710425133 podStartE2EDuration="2m4.710425133s" podCreationTimestamp="2026-02-01 14:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.710348771 +0000 UTC m=+146.230715055" watchObservedRunningTime="2026-02-01 14:23:24.710425133 +0000 UTC m=+146.230791417" Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.711038 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mnjbh" podStartSLOduration=124.711029439 podStartE2EDuration="2m4.711029439s" podCreationTimestamp="2026-02-01 14:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.672187454 +0000 UTC m=+146.192553738" watchObservedRunningTime="2026-02-01 14:23:24.711029439 +0000 UTC m=+146.231395723" Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.728863 4820 patch_prober.go:28] interesting pod/router-default-5444994796-5rtqk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 01 14:23:24 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 01 14:23:24 crc kubenswrapper[4820]: [+]process-running ok Feb 01 14:23:24 crc kubenswrapper[4820]: healthz check failed Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.728955 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rtqk" 
podUID="040a4a57-4d8b-4209-8330-a0e2f3195384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.739924 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:24 crc kubenswrapper[4820]: E0201 14:23:24.740214 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:25.240203645 +0000 UTC m=+146.760569929 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.743841 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-2b7xd" podStartSLOduration=125.743822272 podStartE2EDuration="2m5.743822272s" podCreationTimestamp="2026-02-01 14:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.741937801 +0000 UTC m=+146.262304085" watchObservedRunningTime="2026-02-01 14:23:24.743822272 +0000 UTC m=+146.264188566" Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.812518 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-zckvh" podStartSLOduration=124.812499101 podStartE2EDuration="2m4.812499101s" podCreationTimestamp="2026-02-01 14:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.780442617 +0000 UTC m=+146.300808901" watchObservedRunningTime="2026-02-01 14:23:24.812499101 +0000 UTC m=+146.332865385" Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.813007 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" podStartSLOduration=124.813000484 podStartE2EDuration="2m4.813000484s" podCreationTimestamp="2026-02-01 14:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.810191209 +0000 UTC m=+146.330557493" watchObservedRunningTime="2026-02-01 14:23:24.813000484 +0000 UTC m=+146.333366768" Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.838700 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-76qdl" podStartSLOduration=125.838682777 podStartE2EDuration="2m5.838682777s" podCreationTimestamp="2026-02-01 14:21:19 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.834858716 +0000 UTC m=+146.355225000" watchObservedRunningTime="2026-02-01 14:23:24.838682777 +0000 UTC m=+146.359049061" Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.844301 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:24 crc kubenswrapper[4820]: E0201 14:23:24.844503 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:25.344478282 +0000 UTC m=+146.864844566 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.844774 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:24 crc kubenswrapper[4820]: E0201 14:23:24.845067 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:25.345059647 +0000 UTC m=+146.865425931 (durationBeforeRetry 500ms). 
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.899847 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nlsnf" podStartSLOduration=124.899830545 podStartE2EDuration="2m4.899830545s" podCreationTimestamp="2026-02-01 14:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.898717095 +0000 UTC m=+146.419083389" watchObservedRunningTime="2026-02-01 14:23:24.899830545 +0000 UTC m=+146.420196829"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.900180 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" podStartSLOduration=124.900176184 podStartE2EDuration="2m4.900176184s" podCreationTimestamp="2026-02-01 14:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.872339323 +0000 UTC m=+146.392705627" watchObservedRunningTime="2026-02-01 14:23:24.900176184 +0000 UTC m=+146.420542468"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.932719 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-6xsfq" podStartSLOduration=125.932701911 podStartE2EDuration="2m5.932701911s" podCreationTimestamp="2026-02-01 14:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.927919803 +0000 UTC m=+146.448286087" watchObservedRunningTime="2026-02-01 14:23:24.932701911 +0000 UTC m=+146.453068195"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.947123 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:24 crc kubenswrapper[4820]: E0201 14:23:24.947536 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:25.447517125 +0000 UTC m=+146.967883409 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.961400 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lznd8" podStartSLOduration=125.961382284 podStartE2EDuration="2m5.961382284s" podCreationTimestamp="2026-02-01 14:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.960197612 +0000 UTC m=+146.480563896" watchObservedRunningTime="2026-02-01 14:23:24.961382284 +0000 UTC m=+146.481748568"
Feb 01 14:23:24 crc kubenswrapper[4820]: I0201 14:23:24.994144 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bktjb" podStartSLOduration=125.994128115 podStartE2EDuration="2m5.994128115s" podCreationTimestamp="2026-02-01 14:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:24.994001842 +0000 UTC m=+146.514368126" watchObservedRunningTime="2026-02-01 14:23:24.994128115 +0000 UTC m=+146.514494399"
Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.061037 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:25 crc kubenswrapper[4820]: E0201 14:23:25.061560 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:25.561546731 +0000 UTC m=+147.081913015 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.088743 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sqtv9" podStartSLOduration=125.088723054 podStartE2EDuration="2m5.088723054s" podCreationTimestamp="2026-02-01 14:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:25.042173425 +0000 UTC m=+146.562539709" watchObservedRunningTime="2026-02-01 14:23:25.088723054 +0000 UTC m=+146.609089338"
Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.091039 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g7b4m" podStartSLOduration=125.091028796 podStartE2EDuration="2m5.091028796s" podCreationTimestamp="2026-02-01 14:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:25.089563137 +0000 UTC m=+146.609929421" watchObservedRunningTime="2026-02-01 14:23:25.091028796 +0000 UTC m=+146.611395080"
Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.110239 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2szbj" podStartSLOduration=125.110225956 podStartE2EDuration="2m5.110225956s" podCreationTimestamp="2026-02-01 14:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:25.109786705 +0000 UTC m=+146.630152989" watchObservedRunningTime="2026-02-01 14:23:25.110225956 +0000 UTC m=+146.630592240"
Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.136724 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xn6hf" podStartSLOduration=125.136706201 podStartE2EDuration="2m5.136706201s" podCreationTimestamp="2026-02-01 14:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:25.12841385 +0000 UTC m=+146.648780134" watchObservedRunningTime="2026-02-01 14:23:25.136706201 +0000 UTC m=+146.657072485"
Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.158141 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgsmt" podStartSLOduration=126.158124022 podStartE2EDuration="2m6.158124022s" podCreationTimestamp="2026-02-01 14:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:25.15730734 +0000 UTC m=+146.677673624" watchObservedRunningTime="2026-02-01 14:23:25.158124022 +0000 UTC m=+146.678490306"
Feb 01 14:23:25
crc kubenswrapper[4820]: I0201 14:23:25.162067 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:25 crc kubenswrapper[4820]: E0201 14:23:25.162311 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:25.662285863 +0000 UTC m=+147.182652147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.162408 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:25 crc kubenswrapper[4820]: E0201 14:23:25.162730 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:25.662722884 +0000 UTC m=+147.183089168 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.209092 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29499255-6plmj" podStartSLOduration=125.209074928 podStartE2EDuration="2m5.209074928s" podCreationTimestamp="2026-02-01 14:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:25.208136563 +0000 UTC m=+146.728502847" watchObservedRunningTime="2026-02-01 14:23:25.209074928 +0000 UTC m=+146.729441212" Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.228062 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g52lb" podStartSLOduration=125.228041873 podStartE2EDuration="2m5.228041873s" podCreationTimestamp="2026-02-01 14:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:25.226112052 +0000 UTC m=+146.746478336" watchObservedRunningTime="2026-02-01 14:23:25.228041873 +0000 UTC m=+146.748408147" Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.263382 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:25 crc kubenswrapper[4820]: E0201 14:23:25.273085 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:25.773040681 +0000 UTC m=+147.293406965 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.273207 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hkggw" event={"ID":"72158998-bd6f-45ff-b2dd-06e73ee5d53f","Type":"ContainerStarted","Data":"475c2e841cfcbee9789f01c6063c4f8044e7a9dd51ab9868cb58d1827ce71a34"} Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.273916 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-hkggw" Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.288258 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-46bvd" podStartSLOduration=126.288236315 podStartE2EDuration="2m6.288236315s" podCreationTimestamp="2026-02-01 14:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:25.273289048 +0000 UTC m=+146.793655332" watchObservedRunningTime="2026-02-01 14:23:25.288236315 +0000 UTC m=+146.808602609" Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.291002 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k57sd" event={"ID":"47bf7210-2d6d-484a-bac5-3847ea568287","Type":"ContainerStarted","Data":"c183e41f7c0660292de1bf2b3fb2fc23ac28785074be5c233e01697f3d1071e3"} Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.304502 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" event={"ID":"e8c3f194-8abb-4c41-8418-169b11d6afd2","Type":"ContainerStarted","Data":"29b249cabbc2d26eedcf42ff7f84bf3eb1687be8f85b9e5f98c6154eb4cc0f64"} Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.314695 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-m7jf2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.314763 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-m7jf2" podUID="3449e7b8-24df-4789-959f-4ac101303cc2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.327076 4820 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wgsmb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.327130 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" podUID="973ec7e3-13c9-47b8-b10e-5bff2619f164" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.338594 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5gzxg" Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.370982 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.396202 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xn6hf" Feb 01 14:23:25 crc kubenswrapper[4820]: E0201 14:23:25.397316 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:25.897298809 +0000 UTC m=+147.417665173 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.471985 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:25 crc kubenswrapper[4820]: E0201 14:23:25.472369 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:25.972342947 +0000 UTC m=+147.492709231 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.553007 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.572998 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:25 crc kubenswrapper[4820]: E0201 14:23:25.573371 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:26.073356156 +0000 UTC m=+147.593722440 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.674545 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:25 crc kubenswrapper[4820]: E0201 14:23:25.674938 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:26.17491134 +0000 UTC m=+147.695277614 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.726423 4820 patch_prober.go:28] interesting pod/router-default-5444994796-5rtqk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 01 14:23:25 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 01 14:23:25 crc kubenswrapper[4820]: [+]process-running ok Feb 01 14:23:25 crc kubenswrapper[4820]: healthz check failed Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.726480 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rtqk" podUID="040a4a57-4d8b-4209-8330-a0e2f3195384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.776000 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:25 crc kubenswrapper[4820]: E0201 14:23:25.776435 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:26.276417192 +0000 UTC m=+147.796783546 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.862101 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gtlmh" Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.877695 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:25 crc kubenswrapper[4820]: E0201 14:23:25.878030 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:26.378010797 +0000 UTC m=+147.898377081 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:25 crc kubenswrapper[4820]: I0201 14:23:25.979221 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:25 crc kubenswrapper[4820]: E0201 14:23:25.979616 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:26.479597301 +0000 UTC m=+147.999963635 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.034751 4820 csr.go:261] certificate signing request csr-xfrjz is approved, waiting to be issued Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.054401 4820 csr.go:257] certificate signing request csr-xfrjz is issued Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.080066 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:26 crc kubenswrapper[4820]: E0201 14:23:26.080426 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:26.580412315 +0000 UTC m=+148.100778599 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.181249 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:26 crc kubenswrapper[4820]: E0201 14:23:26.181692 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:26.681679032 +0000 UTC m=+148.202045316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.282233 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:26 crc kubenswrapper[4820]: E0201 14:23:26.282411 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:26.782388363 +0000 UTC m=+148.302754647 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.282509 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:26 crc kubenswrapper[4820]: E0201 14:23:26.282793 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:26.782782913 +0000 UTC m=+148.303149187 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.313446 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k57sd" event={"ID":"47bf7210-2d6d-484a-bac5-3847ea568287","Type":"ContainerStarted","Data":"27498b3d78aa253b4df932c77c1f360203b52b13672af72f3162b87d58007dd0"} Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.314214 4820 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wgsmb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.314264 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" podUID="973ec7e3-13c9-47b8-b10e-5bff2619f164" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.383451 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:26 crc kubenswrapper[4820]: E0201 14:23:26.383682 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-01 14:23:26.883658579 +0000 UTC m=+148.404024863 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.384913 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:26 crc kubenswrapper[4820]: E0201 14:23:26.385771 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:26.885748174 +0000 UTC m=+148.406118498 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.486304 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:26 crc kubenswrapper[4820]: E0201 14:23:26.486406 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:26.986390114 +0000 UTC m=+148.506756398 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.486698 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:26 crc kubenswrapper[4820]: E0201 14:23:26.486973 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:26.986966229 +0000 UTC m=+148.507332513 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.588630 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:26 crc kubenswrapper[4820]: E0201 14:23:26.588830 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:27.08880535 +0000 UTC m=+148.609171634 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.588951 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:26 crc kubenswrapper[4820]: E0201 14:23:26.589317 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:27.089308824 +0000 UTC m=+148.609675108 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.647145 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p6kqz"] Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.648227 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p6kqz" Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.653283 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.671183 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p6kqz"] Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.692624 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.692814 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dee68b0-a47b-49fd-a889-7bf3bc58c380-utilities\") pod \"community-operators-p6kqz\" (UID: \"3dee68b0-a47b-49fd-a889-7bf3bc58c380\") " pod="openshift-marketplace/community-operators-p6kqz" Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.692915 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm5ql\" (UniqueName: \"kubernetes.io/projected/3dee68b0-a47b-49fd-a889-7bf3bc58c380-kube-api-access-sm5ql\") pod \"community-operators-p6kqz\" (UID: \"3dee68b0-a47b-49fd-a889-7bf3bc58c380\") " pod="openshift-marketplace/community-operators-p6kqz" Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.692969 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dee68b0-a47b-49fd-a889-7bf3bc58c380-catalog-content\") pod \"community-operators-p6kqz\" (UID: \"3dee68b0-a47b-49fd-a889-7bf3bc58c380\") " pod="openshift-marketplace/community-operators-p6kqz" Feb 01 14:23:26 crc kubenswrapper[4820]: E0201 14:23:26.693019 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:27.192988894 +0000 UTC m=+148.713355178 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.732774 4820 patch_prober.go:28] interesting pod/router-default-5444994796-5rtqk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 01 14:23:26 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 01 14:23:26 crc kubenswrapper[4820]: [+]process-running ok Feb 01 14:23:26 crc kubenswrapper[4820]: healthz check failed Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.732825 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rtqk" podUID="040a4a57-4d8b-4209-8330-a0e2f3195384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.797303 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dee68b0-a47b-49fd-a889-7bf3bc58c380-catalog-content\") pod \"community-operators-p6kqz\" (UID: \"3dee68b0-a47b-49fd-a889-7bf3bc58c380\") " pod="openshift-marketplace/community-operators-p6kqz" Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.797373 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dee68b0-a47b-49fd-a889-7bf3bc58c380-utilities\") pod \"community-operators-p6kqz\" (UID: \"3dee68b0-a47b-49fd-a889-7bf3bc58c380\") " pod="openshift-marketplace/community-operators-p6kqz" Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.797517 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm5ql\" (UniqueName: \"kubernetes.io/projected/3dee68b0-a47b-49fd-a889-7bf3bc58c380-kube-api-access-sm5ql\") pod \"community-operators-p6kqz\" (UID: \"3dee68b0-a47b-49fd-a889-7bf3bc58c380\") " pod="openshift-marketplace/community-operators-p6kqz" Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.797555 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:26 crc kubenswrapper[4820]: E0201 14:23:26.797918 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:27.297906897 +0000 UTC m=+148.818273181 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.798798 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dee68b0-a47b-49fd-a889-7bf3bc58c380-catalog-content\") pod \"community-operators-p6kqz\" (UID: \"3dee68b0-a47b-49fd-a889-7bf3bc58c380\") " pod="openshift-marketplace/community-operators-p6kqz" Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.799078 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dee68b0-a47b-49fd-a889-7bf3bc58c380-utilities\") pod \"community-operators-p6kqz\" (UID: \"3dee68b0-a47b-49fd-a889-7bf3bc58c380\") " pod="openshift-marketplace/community-operators-p6kqz" Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.826679 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z5dnc"] Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.827623 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z5dnc" Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.844790 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z5dnc"] Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.844994 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm5ql\" (UniqueName: \"kubernetes.io/projected/3dee68b0-a47b-49fd-a889-7bf3bc58c380-kube-api-access-sm5ql\") pod \"community-operators-p6kqz\" (UID: \"3dee68b0-a47b-49fd-a889-7bf3bc58c380\") " pod="openshift-marketplace/community-operators-p6kqz" Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.845102 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.899071 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.899555 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cd3df7b-e150-490b-9785-ccfab6b264b5-utilities\") pod \"certified-operators-z5dnc\" (UID: \"5cd3df7b-e150-490b-9785-ccfab6b264b5\") " pod="openshift-marketplace/certified-operators-z5dnc" Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.899588 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cd3df7b-e150-490b-9785-ccfab6b264b5-catalog-content\") pod \"certified-operators-z5dnc\" (UID: \"5cd3df7b-e150-490b-9785-ccfab6b264b5\") " 
pod="openshift-marketplace/certified-operators-z5dnc" Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.899628 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46s25\" (UniqueName: \"kubernetes.io/projected/5cd3df7b-e150-490b-9785-ccfab6b264b5-kube-api-access-46s25\") pod \"certified-operators-z5dnc\" (UID: \"5cd3df7b-e150-490b-9785-ccfab6b264b5\") " pod="openshift-marketplace/certified-operators-z5dnc" Feb 01 14:23:26 crc kubenswrapper[4820]: E0201 14:23:26.899717 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:27.399702317 +0000 UTC m=+148.920068601 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:26 crc kubenswrapper[4820]: I0201 14:23:26.963445 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6kqz" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.001257 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cd3df7b-e150-490b-9785-ccfab6b264b5-utilities\") pod \"certified-operators-z5dnc\" (UID: \"5cd3df7b-e150-490b-9785-ccfab6b264b5\") " pod="openshift-marketplace/certified-operators-z5dnc" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.001301 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cd3df7b-e150-490b-9785-ccfab6b264b5-catalog-content\") pod \"certified-operators-z5dnc\" (UID: \"5cd3df7b-e150-490b-9785-ccfab6b264b5\") " pod="openshift-marketplace/certified-operators-z5dnc" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.001337 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46s25\" (UniqueName: \"kubernetes.io/projected/5cd3df7b-e150-490b-9785-ccfab6b264b5-kube-api-access-46s25\") pod \"certified-operators-z5dnc\" (UID: \"5cd3df7b-e150-490b-9785-ccfab6b264b5\") " pod="openshift-marketplace/certified-operators-z5dnc" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.001363 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:27 crc kubenswrapper[4820]: E0201 14:23:27.001661 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:27.501651321 +0000 UTC m=+149.022017605 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.002122 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cd3df7b-e150-490b-9785-ccfab6b264b5-utilities\") pod \"certified-operators-z5dnc\" (UID: \"5cd3df7b-e150-490b-9785-ccfab6b264b5\") " pod="openshift-marketplace/certified-operators-z5dnc" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.002205 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cd3df7b-e150-490b-9785-ccfab6b264b5-catalog-content\") pod \"certified-operators-z5dnc\" (UID: \"5cd3df7b-e150-490b-9785-ccfab6b264b5\") " pod="openshift-marketplace/certified-operators-z5dnc" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.045073 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46s25\" (UniqueName: \"kubernetes.io/projected/5cd3df7b-e150-490b-9785-ccfab6b264b5-kube-api-access-46s25\") pod \"certified-operators-z5dnc\" (UID: \"5cd3df7b-e150-490b-9785-ccfab6b264b5\") " pod="openshift-marketplace/certified-operators-z5dnc" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.047126 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dmllb"] Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.047983 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dmllb" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.055176 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-01 14:18:26 +0000 UTC, rotation deadline is 2026-11-30 18:47:37.46295801 +0000 UTC Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.055199 4820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7252h24m10.407760574s for next certificate rotation Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.072952 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dmllb"] Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.102648 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.102818 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.102859 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fdec671-8c4f-4814-80f3-eb6580b4a706-utilities\") pod \"community-operators-dmllb\" (UID: \"4fdec671-8c4f-4814-80f3-eb6580b4a706\") " pod="openshift-marketplace/community-operators-dmllb" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.102891 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fdec671-8c4f-4814-80f3-eb6580b4a706-catalog-content\") pod \"community-operators-dmllb\" (UID: \"4fdec671-8c4f-4814-80f3-eb6580b4a706\") " pod="openshift-marketplace/community-operators-dmllb" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.102918 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.102941 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.102984 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.103011 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjjlw\" (UniqueName: \"kubernetes.io/projected/4fdec671-8c4f-4814-80f3-eb6580b4a706-kube-api-access-gjjlw\") pod \"community-operators-dmllb\" (UID: \"4fdec671-8c4f-4814-80f3-eb6580b4a706\") " pod="openshift-marketplace/community-operators-dmllb" Feb 01 14:23:27 crc kubenswrapper[4820]: E0201 14:23:27.103959 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:27.603933874 +0000 UTC m=+149.124300178 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.104796 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.106948 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.123571 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.129851 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.206005 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fdec671-8c4f-4814-80f3-eb6580b4a706-utilities\") pod \"community-operators-dmllb\" (UID: \"4fdec671-8c4f-4814-80f3-eb6580b4a706\") " pod="openshift-marketplace/community-operators-dmllb" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 
14:23:27.206270 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fdec671-8c4f-4814-80f3-eb6580b4a706-catalog-content\") pod \"community-operators-dmllb\" (UID: \"4fdec671-8c4f-4814-80f3-eb6580b4a706\") " pod="openshift-marketplace/community-operators-dmllb" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.206313 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.206372 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjjlw\" (UniqueName: \"kubernetes.io/projected/4fdec671-8c4f-4814-80f3-eb6580b4a706-kube-api-access-gjjlw\") pod \"community-operators-dmllb\" (UID: \"4fdec671-8c4f-4814-80f3-eb6580b4a706\") " pod="openshift-marketplace/community-operators-dmllb" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.207224 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fdec671-8c4f-4814-80f3-eb6580b4a706-utilities\") pod \"community-operators-dmllb\" (UID: \"4fdec671-8c4f-4814-80f3-eb6580b4a706\") " pod="openshift-marketplace/community-operators-dmllb" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.207421 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fdec671-8c4f-4814-80f3-eb6580b4a706-catalog-content\") pod \"community-operators-dmllb\" (UID: \"4fdec671-8c4f-4814-80f3-eb6580b4a706\") " pod="openshift-marketplace/community-operators-dmllb" Feb 01 14:23:27 crc kubenswrapper[4820]: E0201 14:23:27.207623 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:27.707611934 +0000 UTC m=+149.227978218 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.214650 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z5dnc" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.228182 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.242130 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.249117 4820 util.go:30] "No sandbox for pod can be found. 
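The nestedpendingoperations errors above all fail the same way: the kubevirt.io.hostpath-provisioner CSI driver has not registered with the kubelet yet, so each mount or unmount attempt is parked with a durationBeforeRetry of 500ms. A compressed sketch of that gating logic, under the assumption that it is a per-operation backoff record consulted by the reconciler; the type and method names are illustrative:

```go
// Sketch (assumption): a failed volume operation records when it failed
// and how long to wait, and the reconciler refuses to relaunch the
// operation until that deadline has passed.
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	lastError time.Time
	duration  time.Duration
}

// next doubles the wait after every failure, starting at the 500ms seen
// in the log; the 2m cap is an assumption for the sketch.
func (b *backoff) next(now time.Time) {
	if b.duration == 0 {
		b.duration = 500 * time.Millisecond
	} else if b.duration < 2*time.Minute {
		b.duration *= 2
	}
	b.lastError = now
}

func (b *backoff) retryPermitted(now time.Time) bool {
	return now.After(b.lastError.Add(b.duration))
}

func main() {
	var b backoff
	now := time.Now()
	b.next(now) // first failure: "No retries permitted until" now+500ms
	fmt.Println("retry at", b.lastError.Add(b.duration), "permitted now?", b.retryPermitted(now))
}
```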
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.271494 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d7gvg"] Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.273475 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d7gvg" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.303399 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjjlw\" (UniqueName: \"kubernetes.io/projected/4fdec671-8c4f-4814-80f3-eb6580b4a706-kube-api-access-gjjlw\") pod \"community-operators-dmllb\" (UID: \"4fdec671-8c4f-4814-80f3-eb6580b4a706\") " pod="openshift-marketplace/community-operators-dmllb" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.304099 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d7gvg"] Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.308238 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.308452 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90ef31dc-afaa-4644-a842-43b67375e125-catalog-content\") pod \"certified-operators-d7gvg\" (UID: \"90ef31dc-afaa-4644-a842-43b67375e125\") " pod="openshift-marketplace/certified-operators-d7gvg" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.308505 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqdj7\" (UniqueName: \"kubernetes.io/projected/90ef31dc-afaa-4644-a842-43b67375e125-kube-api-access-gqdj7\") pod \"certified-operators-d7gvg\" (UID: \"90ef31dc-afaa-4644-a842-43b67375e125\") " pod="openshift-marketplace/certified-operators-d7gvg" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.308566 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90ef31dc-afaa-4644-a842-43b67375e125-utilities\") pod \"certified-operators-d7gvg\" (UID: \"90ef31dc-afaa-4644-a842-43b67375e125\") " pod="openshift-marketplace/certified-operators-d7gvg" Feb 01 14:23:27 crc kubenswrapper[4820]: E0201 14:23:27.308677 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:27.808659404 +0000 UTC m=+149.329025688 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.316361 4820 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-jc9mp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.316423 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jc9mp" podUID="91004505-bf59-410e-831a-62e980857994" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.376479 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dmllb" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.415538 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k57sd" event={"ID":"47bf7210-2d6d-484a-bac5-3847ea568287","Type":"ContainerStarted","Data":"bf57c70899ef944c82e709c0ad4ea306821d80c78f7fde56f4828eebd91cf2f0"} Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.415592 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k57sd" event={"ID":"47bf7210-2d6d-484a-bac5-3847ea568287","Type":"ContainerStarted","Data":"0d282dedbc6b603284fa4f0de686b47d36b77338ff766a5191cdc9a476b7ab6d"} Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.422677 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90ef31dc-afaa-4644-a842-43b67375e125-utilities\") pod \"certified-operators-d7gvg\" (UID: \"90ef31dc-afaa-4644-a842-43b67375e125\") " pod="openshift-marketplace/certified-operators-d7gvg" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.422722 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90ef31dc-afaa-4644-a842-43b67375e125-catalog-content\") pod \"certified-operators-d7gvg\" (UID: \"90ef31dc-afaa-4644-a842-43b67375e125\") " pod="openshift-marketplace/certified-operators-d7gvg" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.422749 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.422776 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gqdj7\" (UniqueName: \"kubernetes.io/projected/90ef31dc-afaa-4644-a842-43b67375e125-kube-api-access-gqdj7\") pod \"certified-operators-d7gvg\" (UID: \"90ef31dc-afaa-4644-a842-43b67375e125\") " pod="openshift-marketplace/certified-operators-d7gvg" Feb 01 14:23:27 crc kubenswrapper[4820]: E0201 14:23:27.423308 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:27.923296936 +0000 UTC m=+149.443663220 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.423725 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90ef31dc-afaa-4644-a842-43b67375e125-utilities\") pod \"certified-operators-d7gvg\" (UID: \"90ef31dc-afaa-4644-a842-43b67375e125\") " pod="openshift-marketplace/certified-operators-d7gvg" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.424843 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90ef31dc-afaa-4644-a842-43b67375e125-catalog-content\") pod \"certified-operators-d7gvg\" (UID: \"90ef31dc-afaa-4644-a842-43b67375e125\") " pod="openshift-marketplace/certified-operators-d7gvg" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.459532 4820 generic.go:334] "Generic (PLEG): container finished" podID="a6740627-6bd7-48f8-9dd8-ceccce34fc7f" containerID="e5ffadf679e3748ef24b06383df54de342b0be63fba86a3927aebe26fd368d18" exitCode=0 Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.460601 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499255-6plmj" event={"ID":"a6740627-6bd7-48f8-9dd8-ceccce34fc7f","Type":"ContainerDied","Data":"e5ffadf679e3748ef24b06383df54de342b0be63fba86a3927aebe26fd368d18"} Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.500265 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-k57sd" podStartSLOduration=10.500248775 podStartE2EDuration="10.500248775s" podCreationTimestamp="2026-02-01 14:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:27.499219088 +0000 UTC m=+149.019585372" watchObservedRunningTime="2026-02-01 14:23:27.500248775 +0000 UTC m=+149.020615059" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.521378 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqdj7\" (UniqueName: \"kubernetes.io/projected/90ef31dc-afaa-4644-a842-43b67375e125-kube-api-access-gqdj7\") pod \"certified-operators-d7gvg\" (UID: \"90ef31dc-afaa-4644-a842-43b67375e125\") " pod="openshift-marketplace/certified-operators-d7gvg" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.525331 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:27 crc kubenswrapper[4820]: E0201 14:23:27.526398 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:28.026374641 +0000 UTC m=+149.546740925 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.547997 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.550938 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.556318 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.558943 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.569955 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.632152 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.632206 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ed27d19-0a49-4510-8056-98c4a7b869b7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6ed27d19-0a49-4510-8056-98c4a7b869b7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.632227 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ed27d19-0a49-4510-8056-98c4a7b869b7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6ed27d19-0a49-4510-8056-98c4a7b869b7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 14:23:27 crc kubenswrapper[4820]: E0201 14:23:27.632509 4820 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:28.132498117 +0000 UTC m=+149.652864401 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.691767 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d7gvg" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.701058 4820 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.730661 4820 patch_prober.go:28] interesting pod/router-default-5444994796-5rtqk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 01 14:23:27 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 01 14:23:27 crc kubenswrapper[4820]: [+]process-running ok Feb 01 14:23:27 crc kubenswrapper[4820]: healthz check failed Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.730992 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rtqk" podUID="040a4a57-4d8b-4209-8330-a0e2f3195384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.736554 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.736934 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ed27d19-0a49-4510-8056-98c4a7b869b7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6ed27d19-0a49-4510-8056-98c4a7b869b7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.736968 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ed27d19-0a49-4510-8056-98c4a7b869b7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6ed27d19-0a49-4510-8056-98c4a7b869b7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 14:23:27 crc kubenswrapper[4820]: E0201 14:23:27.737460 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:28.237445451 +0000 UTC m=+149.757811735 (durationBeforeRetry 500ms). 
Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.737499 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ed27d19-0a49-4510-8056-98c4a7b869b7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6ed27d19-0a49-4510-8056-98c4a7b869b7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.791592 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p6kqz"]
Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.813603 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ed27d19-0a49-4510-8056-98c4a7b869b7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6ed27d19-0a49-4510-8056-98c4a7b869b7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.840033 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:27 crc kubenswrapper[4820]: E0201 14:23:27.840333 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:28.340320969 +0000 UTC m=+149.860687253 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:27 crc kubenswrapper[4820]: W0201 14:23:27.901899 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dee68b0_a47b_49fd_a889_7bf3bc58c380.slice/crio-fb04e785a253ab3ec846d5e4dee59e8f25fa63535ce3165519f2b5ea0cfd1cfa WatchSource:0}: Error finding container fb04e785a253ab3ec846d5e4dee59e8f25fa63535ce3165519f2b5ea0cfd1cfa: Status 404 returned error can't find the container with id fb04e785a253ab3ec846d5e4dee59e8f25fa63535ce3165519f2b5ea0cfd1cfa
Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.940857 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:27 crc kubenswrapper[4820]: E0201 14:23:27.941221 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:28.441206675 +0000 UTC m=+149.961572959 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:27 crc kubenswrapper[4820]: I0201 14:23:27.994174 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.041977 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:28 crc kubenswrapper[4820]: E0201 14:23:28.042628 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:28.542614165 +0000 UTC m=+150.062980449 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.144434 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:28 crc kubenswrapper[4820]: E0201 14:23:28.144641 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:28.644612331 +0000 UTC m=+150.164978625 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.145004 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:28 crc kubenswrapper[4820]: E0201 14:23:28.145374 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:28.645362 +0000 UTC m=+150.165728284 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.176273 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dmllb"]
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.251816 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:28 crc kubenswrapper[4820]: E0201 14:23:28.252142 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 14:23:28.752126232 +0000 UTC m=+150.272492516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.327764 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jc9mp"
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.349833 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z5dnc"]
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.355681 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:28 crc kubenswrapper[4820]: E0201 14:23:28.356968 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 14:23:28.856956203 +0000 UTC m=+150.377322487 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzchg" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.387607 4820 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-01T14:23:27.701085723Z","Handler":null,"Name":""}
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.392137 4820 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.392174 4820 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.456429 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.472147 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.476597 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d7gvg"]
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.487772 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.495758 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"dd494b84b37083a0778845c76b8006b670ea3454290b87242703b266105198ce"}
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.499165 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmllb" event={"ID":"4fdec671-8c4f-4814-80f3-eb6580b4a706","Type":"ContainerStarted","Data":"a356323a3713721c01f356c10d1a95d52791fa273040fed553104d74405a7db5"}
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.506541 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f059931396a053b1afa13d9e127190aa176673795f1d70076f5ff8aa2cce66b6"}
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.511600 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a7888981bc36ad0ec6efbaac45107ec0991e718857769d31cc3cebe1ab3d3890"}
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.513264 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5dnc" event={"ID":"5cd3df7b-e150-490b-9785-ccfab6b264b5","Type":"ContainerStarted","Data":"c2e6593def29b6014178c79276b5a3c799ccbdf439f4b890fa51a5c9c9d147d2"}
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.514436 4820 generic.go:334] "Generic (PLEG): container finished" podID="3dee68b0-a47b-49fd-a889-7bf3bc58c380" containerID="df26604093974defe4e59716e198e227f60977d274340e979b3799fdba725624" exitCode=0
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.514517 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6kqz" event={"ID":"3dee68b0-a47b-49fd-a889-7bf3bc58c380","Type":"ContainerDied","Data":"df26604093974defe4e59716e198e227f60977d274340e979b3799fdba725624"}
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.514594 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6kqz" event={"ID":"3dee68b0-a47b-49fd-a889-7bf3bc58c380","Type":"ContainerStarted","Data":"fb04e785a253ab3ec846d5e4dee59e8f25fa63535ce3165519f2b5ea0cfd1cfa"}
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.519540 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.558787 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.562484 4820 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.562674 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.596454 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzchg\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.685135 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fzchg"
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.751141 4820 patch_prober.go:28] interesting pod/router-default-5444994796-5rtqk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 01 14:23:28 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld
Feb 01 14:23:28 crc kubenswrapper[4820]: [+]process-running ok
Feb 01 14:23:28 crc kubenswrapper[4820]: healthz check failed
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.751572 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rtqk" podUID="040a4a57-4d8b-4209-8330-a0e2f3195384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.814000 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499255-6plmj"
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.830388 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vkfmb"]
Feb 01 14:23:28 crc kubenswrapper[4820]: E0201 14:23:28.830978 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6740627-6bd7-48f8-9dd8-ceccce34fc7f" containerName="collect-profiles"
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.830990 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6740627-6bd7-48f8-9dd8-ceccce34fc7f" containerName="collect-profiles"
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.831157 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6740627-6bd7-48f8-9dd8-ceccce34fc7f" containerName="collect-profiles"
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.832483 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkfmb"
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.838267 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.864493 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg7q9\" (UniqueName: \"kubernetes.io/projected/a6740627-6bd7-48f8-9dd8-ceccce34fc7f-kube-api-access-pg7q9\") pod \"a6740627-6bd7-48f8-9dd8-ceccce34fc7f\" (UID: \"a6740627-6bd7-48f8-9dd8-ceccce34fc7f\") "
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.864584 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6740627-6bd7-48f8-9dd8-ceccce34fc7f-secret-volume\") pod \"a6740627-6bd7-48f8-9dd8-ceccce34fc7f\" (UID: \"a6740627-6bd7-48f8-9dd8-ceccce34fc7f\") "
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.864633 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6740627-6bd7-48f8-9dd8-ceccce34fc7f-config-volume\") pod \"a6740627-6bd7-48f8-9dd8-ceccce34fc7f\" (UID: \"a6740627-6bd7-48f8-9dd8-ceccce34fc7f\") "
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.864979 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f73b6fb9-8420-42fe-9b3d-42d17a204743-utilities\") pod \"redhat-marketplace-vkfmb\" (UID: \"f73b6fb9-8420-42fe-9b3d-42d17a204743\") " pod="openshift-marketplace/redhat-marketplace-vkfmb"
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.869013 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6740627-6bd7-48f8-9dd8-ceccce34fc7f-config-volume" (OuterVolumeSpecName: "config-volume") pod "a6740627-6bd7-48f8-9dd8-ceccce34fc7f" (UID: "a6740627-6bd7-48f8-9dd8-ceccce34fc7f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.870021 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f73b6fb9-8420-42fe-9b3d-42d17a204743-catalog-content\") pod \"redhat-marketplace-vkfmb\" (UID: \"f73b6fb9-8420-42fe-9b3d-42d17a204743\") " pod="openshift-marketplace/redhat-marketplace-vkfmb"
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.870165 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9nv5\" (UniqueName: \"kubernetes.io/projected/f73b6fb9-8420-42fe-9b3d-42d17a204743-kube-api-access-g9nv5\") pod \"redhat-marketplace-vkfmb\" (UID: \"f73b6fb9-8420-42fe-9b3d-42d17a204743\") " pod="openshift-marketplace/redhat-marketplace-vkfmb"
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.870297 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6740627-6bd7-48f8-9dd8-ceccce34fc7f-config-volume\") on node \"crc\" DevicePath \"\""
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.874910 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6740627-6bd7-48f8-9dd8-ceccce34fc7f-kube-api-access-pg7q9" (OuterVolumeSpecName: "kube-api-access-pg7q9") pod "a6740627-6bd7-48f8-9dd8-ceccce34fc7f" (UID: "a6740627-6bd7-48f8-9dd8-ceccce34fc7f"). InnerVolumeSpecName "kube-api-access-pg7q9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.875302 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6740627-6bd7-48f8-9dd8-ceccce34fc7f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a6740627-6bd7-48f8-9dd8-ceccce34fc7f" (UID: "a6740627-6bd7-48f8-9dd8-ceccce34fc7f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.923994 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkfmb"]
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.971168 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9nv5\" (UniqueName: \"kubernetes.io/projected/f73b6fb9-8420-42fe-9b3d-42d17a204743-kube-api-access-g9nv5\") pod \"redhat-marketplace-vkfmb\" (UID: \"f73b6fb9-8420-42fe-9b3d-42d17a204743\") " pod="openshift-marketplace/redhat-marketplace-vkfmb"
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.971250 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f73b6fb9-8420-42fe-9b3d-42d17a204743-utilities\") pod \"redhat-marketplace-vkfmb\" (UID: \"f73b6fb9-8420-42fe-9b3d-42d17a204743\") " pod="openshift-marketplace/redhat-marketplace-vkfmb"
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.971305 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f73b6fb9-8420-42fe-9b3d-42d17a204743-catalog-content\") pod \"redhat-marketplace-vkfmb\" (UID: \"f73b6fb9-8420-42fe-9b3d-42d17a204743\") " pod="openshift-marketplace/redhat-marketplace-vkfmb"
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.971348 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6740627-6bd7-48f8-9dd8-ceccce34fc7f-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.971363 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pg7q9\" (UniqueName: \"kubernetes.io/projected/a6740627-6bd7-48f8-9dd8-ceccce34fc7f-kube-api-access-pg7q9\") on node \"crc\" DevicePath \"\""
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.971802 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f73b6fb9-8420-42fe-9b3d-42d17a204743-catalog-content\") pod \"redhat-marketplace-vkfmb\" (UID: \"f73b6fb9-8420-42fe-9b3d-42d17a204743\") " pod="openshift-marketplace/redhat-marketplace-vkfmb"
Feb 01 14:23:28 crc kubenswrapper[4820]: I0201 14:23:28.972463 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f73b6fb9-8420-42fe-9b3d-42d17a204743-utilities\") pod \"redhat-marketplace-vkfmb\" (UID: \"f73b6fb9-8420-42fe-9b3d-42d17a204743\") " pod="openshift-marketplace/redhat-marketplace-vkfmb"
Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.012475 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9nv5\" (UniqueName: \"kubernetes.io/projected/f73b6fb9-8420-42fe-9b3d-42d17a204743-kube-api-access-g9nv5\") pod \"redhat-marketplace-vkfmb\" (UID: \"f73b6fb9-8420-42fe-9b3d-42d17a204743\") " pod="openshift-marketplace/redhat-marketplace-vkfmb"
Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.166381 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkfmb"
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkfmb" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.216641 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.217331 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fzchg"] Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.261228 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nswhf"] Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.265955 4820 patch_prober.go:28] interesting pod/apiserver-76f77b778f-gwgjr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 01 14:23:29 crc kubenswrapper[4820]: [+]log ok Feb 01 14:23:29 crc kubenswrapper[4820]: [+]etcd ok Feb 01 14:23:29 crc kubenswrapper[4820]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 01 14:23:29 crc kubenswrapper[4820]: [+]poststarthook/generic-apiserver-start-informers ok Feb 01 14:23:29 crc kubenswrapper[4820]: [+]poststarthook/max-in-flight-filter ok Feb 01 14:23:29 crc kubenswrapper[4820]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 01 14:23:29 crc kubenswrapper[4820]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 01 14:23:29 crc kubenswrapper[4820]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 01 14:23:29 crc kubenswrapper[4820]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Feb 01 14:23:29 crc kubenswrapper[4820]: [+]poststarthook/project.openshift.io-projectcache ok Feb 01 14:23:29 crc kubenswrapper[4820]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 01 14:23:29 crc kubenswrapper[4820]: [+]poststarthook/openshift.io-startinformers ok Feb 01 14:23:29 crc kubenswrapper[4820]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 01 14:23:29 crc kubenswrapper[4820]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 01 14:23:29 crc kubenswrapper[4820]: livez check failed Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.266019 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" podUID="e8c3f194-8abb-4c41-8418-169b11d6afd2" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.270703 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nswhf" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.273699 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b-utilities\") pod \"redhat-marketplace-nswhf\" (UID: \"26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b\") " pod="openshift-marketplace/redhat-marketplace-nswhf" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.273741 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b-catalog-content\") pod \"redhat-marketplace-nswhf\" (UID: \"26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b\") " pod="openshift-marketplace/redhat-marketplace-nswhf" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.273806 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktn7n\" (UniqueName: \"kubernetes.io/projected/26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b-kube-api-access-ktn7n\") pod \"redhat-marketplace-nswhf\" (UID: \"26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b\") " pod="openshift-marketplace/redhat-marketplace-nswhf" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.286639 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nswhf"] Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.289336 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.300264 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dgds8" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.323074 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.374601 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b-utilities\") pod \"redhat-marketplace-nswhf\" (UID: \"26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b\") " pod="openshift-marketplace/redhat-marketplace-nswhf" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.374653 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b-catalog-content\") pod \"redhat-marketplace-nswhf\" (UID: \"26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b\") " pod="openshift-marketplace/redhat-marketplace-nswhf" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.374765 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktn7n\" (UniqueName: \"kubernetes.io/projected/26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b-kube-api-access-ktn7n\") pod \"redhat-marketplace-nswhf\" (UID: \"26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b\") " pod="openshift-marketplace/redhat-marketplace-nswhf" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.376160 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b-utilities\") pod \"redhat-marketplace-nswhf\" (UID: 
\"26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b\") " pod="openshift-marketplace/redhat-marketplace-nswhf" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.376468 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b-catalog-content\") pod \"redhat-marketplace-nswhf\" (UID: \"26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b\") " pod="openshift-marketplace/redhat-marketplace-nswhf" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.377504 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.377527 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.379678 4820 patch_prober.go:28] interesting pod/console-f9d7485db-j7nmb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.379715 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-j7nmb" podUID="ecb9a255-58d7-4e9e-8861-7b31bc44ed3e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.415798 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktn7n\" (UniqueName: \"kubernetes.io/projected/26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b-kube-api-access-ktn7n\") pod \"redhat-marketplace-nswhf\" (UID: \"26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b\") " pod="openshift-marketplace/redhat-marketplace-nswhf" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.484602 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkfmb"] Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.524468 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6ed27d19-0a49-4510-8056-98c4a7b869b7","Type":"ContainerStarted","Data":"e07f350633bfd0ac051166e3ffd0fa21c6d42d5e5161628ca13aed57129617fc"} Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.524517 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6ed27d19-0a49-4510-8056-98c4a7b869b7","Type":"ContainerStarted","Data":"8b8324213a91914d274c22461f2923ce351ee810579ca4a797c5cdb8d0456054"} Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.526727 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b496b5870f278b9f9debf043b259b5a0a1fc25d7d02f6d68b39d9179444b7b3e"} Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.530946 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"db522e586bf86f574ea8e202a79ac6388ee78e2ee46e9d12c9197d4c16ba2eb9"} Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.531151 4820 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.534314 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.535065 4820 generic.go:334] "Generic (PLEG): container finished" podID="90ef31dc-afaa-4644-a842-43b67375e125" containerID="4ef44a45b3bf6a913f4391983f489b4b0067ec80492123b3b77aec6545f0e313" exitCode=0 Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.535115 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7gvg" event={"ID":"90ef31dc-afaa-4644-a842-43b67375e125","Type":"ContainerDied","Data":"4ef44a45b3bf6a913f4391983f489b4b0067ec80492123b3b77aec6545f0e313"} Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.535131 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7gvg" event={"ID":"90ef31dc-afaa-4644-a842-43b67375e125","Type":"ContainerStarted","Data":"00a961cadc177ea2ea59efa9c827b6cdae03617bfd0407912dd64be48023a78c"} Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.539935 4820 generic.go:334] "Generic (PLEG): container finished" podID="5cd3df7b-e150-490b-9785-ccfab6b264b5" containerID="0ec2f65ff6560662931de6e6881d5bb909de54a157b8b49c9c8b8f1ae409f627" exitCode=0 Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.539976 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5dnc" event={"ID":"5cd3df7b-e150-490b-9785-ccfab6b264b5","Type":"ContainerDied","Data":"0ec2f65ff6560662931de6e6881d5bb909de54a157b8b49c9c8b8f1ae409f627"} Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.542593 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" event={"ID":"e180d67d-fdb1-4874-a793-abe25452fe6d","Type":"ContainerStarted","Data":"8a58e20fbce74158855f23b72f8ecb3f597aaf95f003605e8ba007a694f8bb73"} Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.542616 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" event={"ID":"e180d67d-fdb1-4874-a793-abe25452fe6d","Type":"ContainerStarted","Data":"344664dba510c7fa1121b6a5af9f9eb2fe5509a562d91517cce79cc71a7860ff"} Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.543173 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.547495 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.547477468 podStartE2EDuration="2.547477468s" podCreationTimestamp="2026-02-01 14:23:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:29.537327018 +0000 UTC m=+151.057693302" watchObservedRunningTime="2026-02-01 14:23:29.547477468 +0000 UTC m=+151.067843752" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.547554 4820 generic.go:334] "Generic (PLEG): container finished" podID="4fdec671-8c4f-4814-80f3-eb6580b4a706" containerID="7a9afa16c534b7c4d4525ab405e5c3f51f08e564287a544066246f2a637351e6" exitCode=0 Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.547570 
4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmllb" event={"ID":"4fdec671-8c4f-4814-80f3-eb6580b4a706","Type":"ContainerDied","Data":"7a9afa16c534b7c4d4525ab405e5c3f51f08e564287a544066246f2a637351e6"} Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.551729 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499255-6plmj" event={"ID":"a6740627-6bd7-48f8-9dd8-ceccce34fc7f","Type":"ContainerDied","Data":"c3b30b159dbefa4fd4062212a086136c56412c66b2552c062e1b223861893761"} Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.551850 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3b30b159dbefa4fd4062212a086136c56412c66b2552c062e1b223861893761" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.552119 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499255-6plmj" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.553650 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkfmb" event={"ID":"f73b6fb9-8420-42fe-9b3d-42d17a204743","Type":"ContainerStarted","Data":"9409288cc34cede2c1be5b4412688e80dbba779f2ac6c80f4a13b1922b0d5f7c"} Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.562536 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d597e95bb086a81f4e3bfec1d4487aa3324dc29cc88fcf710222b7a0c1cfb0c0"} Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.598694 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nswhf" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.624225 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-2b7xd" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.635259 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-2b7xd" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.661137 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-m7jf2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.661178 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-m7jf2" podUID="3449e7b8-24df-4789-959f-4ac101303cc2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.661550 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-m7jf2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.661693 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-m7jf2" podUID="3449e7b8-24df-4789-959f-4ac101303cc2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.724475 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-5rtqk" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.727222 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" podStartSLOduration=130.727207813 podStartE2EDuration="2m10.727207813s" podCreationTimestamp="2026-02-01 14:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:29.714760632 +0000 UTC m=+151.235126916" watchObservedRunningTime="2026-02-01 14:23:29.727207813 +0000 UTC m=+151.247574097" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.735058 4820 patch_prober.go:28] interesting pod/router-default-5444994796-5rtqk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 01 14:23:29 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 01 14:23:29 crc kubenswrapper[4820]: [+]process-running ok Feb 01 14:23:29 crc kubenswrapper[4820]: healthz check failed Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.735134 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rtqk" podUID="040a4a57-4d8b-4209-8330-a0e2f3195384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.831524 4820 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.832354 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.837560 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.837665 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.846200 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jd9wc"] Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.847454 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jd9wc" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.850449 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.855735 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.858778 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jd9wc"] Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.905718 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/33866eae-2171-4a4b-9457-96dfc939edba-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"33866eae-2171-4a4b-9457-96dfc939edba\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.905766 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvm4m\" (UniqueName: \"kubernetes.io/projected/df4ea2d8-4be0-4e30-b48a-484a93d725b0-kube-api-access-tvm4m\") pod \"redhat-operators-jd9wc\" (UID: \"df4ea2d8-4be0-4e30-b48a-484a93d725b0\") " pod="openshift-marketplace/redhat-operators-jd9wc" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.905787 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/33866eae-2171-4a4b-9457-96dfc939edba-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"33866eae-2171-4a4b-9457-96dfc939edba\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.905828 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df4ea2d8-4be0-4e30-b48a-484a93d725b0-catalog-content\") pod \"redhat-operators-jd9wc\" (UID: \"df4ea2d8-4be0-4e30-b48a-484a93d725b0\") " pod="openshift-marketplace/redhat-operators-jd9wc" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.905845 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df4ea2d8-4be0-4e30-b48a-484a93d725b0-utilities\") pod \"redhat-operators-jd9wc\" (UID: \"df4ea2d8-4be0-4e30-b48a-484a93d725b0\") " 
pod="openshift-marketplace/redhat-operators-jd9wc" Feb 01 14:23:29 crc kubenswrapper[4820]: I0201 14:23:29.945034 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nswhf"] Feb 01 14:23:29 crc kubenswrapper[4820]: W0201 14:23:29.959330 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26f893e7_b0f2_4d0e_abfe_ecb9fd67fe1b.slice/crio-bf182e8b14900be19e41ae59c11d8b892501377bfb169a2bd205b53741a86427 WatchSource:0}: Error finding container bf182e8b14900be19e41ae59c11d8b892501377bfb169a2bd205b53741a86427: Status 404 returned error can't find the container with id bf182e8b14900be19e41ae59c11d8b892501377bfb169a2bd205b53741a86427 Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.006869 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvm4m\" (UniqueName: \"kubernetes.io/projected/df4ea2d8-4be0-4e30-b48a-484a93d725b0-kube-api-access-tvm4m\") pod \"redhat-operators-jd9wc\" (UID: \"df4ea2d8-4be0-4e30-b48a-484a93d725b0\") " pod="openshift-marketplace/redhat-operators-jd9wc" Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.007267 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/33866eae-2171-4a4b-9457-96dfc939edba-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"33866eae-2171-4a4b-9457-96dfc939edba\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.008138 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/33866eae-2171-4a4b-9457-96dfc939edba-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"33866eae-2171-4a4b-9457-96dfc939edba\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.009951 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df4ea2d8-4be0-4e30-b48a-484a93d725b0-catalog-content\") pod \"redhat-operators-jd9wc\" (UID: \"df4ea2d8-4be0-4e30-b48a-484a93d725b0\") " pod="openshift-marketplace/redhat-operators-jd9wc" Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.010016 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df4ea2d8-4be0-4e30-b48a-484a93d725b0-utilities\") pod \"redhat-operators-jd9wc\" (UID: \"df4ea2d8-4be0-4e30-b48a-484a93d725b0\") " pod="openshift-marketplace/redhat-operators-jd9wc" Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.010106 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/33866eae-2171-4a4b-9457-96dfc939edba-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"33866eae-2171-4a4b-9457-96dfc939edba\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.010533 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df4ea2d8-4be0-4e30-b48a-484a93d725b0-catalog-content\") pod \"redhat-operators-jd9wc\" (UID: \"df4ea2d8-4be0-4e30-b48a-484a93d725b0\") " pod="openshift-marketplace/redhat-operators-jd9wc" Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.010720 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df4ea2d8-4be0-4e30-b48a-484a93d725b0-utilities\") pod \"redhat-operators-jd9wc\" (UID: \"df4ea2d8-4be0-4e30-b48a-484a93d725b0\") " pod="openshift-marketplace/redhat-operators-jd9wc" Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.017754 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8mbbr"] Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.018806 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8mbbr" Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.038254 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8mbbr"] Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.068832 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/33866eae-2171-4a4b-9457-96dfc939edba-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"33866eae-2171-4a4b-9457-96dfc939edba\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.069028 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvm4m\" (UniqueName: \"kubernetes.io/projected/df4ea2d8-4be0-4e30-b48a-484a93d725b0-kube-api-access-tvm4m\") pod \"redhat-operators-jd9wc\" (UID: \"df4ea2d8-4be0-4e30-b48a-484a93d725b0\") " pod="openshift-marketplace/redhat-operators-jd9wc" Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.110864 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/382873c4-83aa-4693-9eb8-7b1f41b0f22b-utilities\") pod \"redhat-operators-8mbbr\" (UID: \"382873c4-83aa-4693-9eb8-7b1f41b0f22b\") " pod="openshift-marketplace/redhat-operators-8mbbr" Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.110966 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxp4l\" (UniqueName: \"kubernetes.io/projected/382873c4-83aa-4693-9eb8-7b1f41b0f22b-kube-api-access-pxp4l\") pod \"redhat-operators-8mbbr\" (UID: \"382873c4-83aa-4693-9eb8-7b1f41b0f22b\") " pod="openshift-marketplace/redhat-operators-8mbbr" Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.111100 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/382873c4-83aa-4693-9eb8-7b1f41b0f22b-catalog-content\") pod \"redhat-operators-8mbbr\" (UID: \"382873c4-83aa-4693-9eb8-7b1f41b0f22b\") " pod="openshift-marketplace/redhat-operators-8mbbr" Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.203830 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.212334 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/382873c4-83aa-4693-9eb8-7b1f41b0f22b-catalog-content\") pod \"redhat-operators-8mbbr\" (UID: \"382873c4-83aa-4693-9eb8-7b1f41b0f22b\") " pod="openshift-marketplace/redhat-operators-8mbbr" Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.212378 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/382873c4-83aa-4693-9eb8-7b1f41b0f22b-utilities\") pod \"redhat-operators-8mbbr\" (UID: \"382873c4-83aa-4693-9eb8-7b1f41b0f22b\") " pod="openshift-marketplace/redhat-operators-8mbbr" Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.212398 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxp4l\" (UniqueName: \"kubernetes.io/projected/382873c4-83aa-4693-9eb8-7b1f41b0f22b-kube-api-access-pxp4l\") pod \"redhat-operators-8mbbr\" (UID: \"382873c4-83aa-4693-9eb8-7b1f41b0f22b\") " pod="openshift-marketplace/redhat-operators-8mbbr" Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.213177 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/382873c4-83aa-4693-9eb8-7b1f41b0f22b-catalog-content\") pod \"redhat-operators-8mbbr\" (UID: \"382873c4-83aa-4693-9eb8-7b1f41b0f22b\") " pod="openshift-marketplace/redhat-operators-8mbbr" Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.214627 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/382873c4-83aa-4693-9eb8-7b1f41b0f22b-utilities\") pod \"redhat-operators-8mbbr\" (UID: \"382873c4-83aa-4693-9eb8-7b1f41b0f22b\") " pod="openshift-marketplace/redhat-operators-8mbbr" Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.220386 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jd9wc" Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.246598 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxp4l\" (UniqueName: \"kubernetes.io/projected/382873c4-83aa-4693-9eb8-7b1f41b0f22b-kube-api-access-pxp4l\") pod \"redhat-operators-8mbbr\" (UID: \"382873c4-83aa-4693-9eb8-7b1f41b0f22b\") " pod="openshift-marketplace/redhat-operators-8mbbr" Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.395663 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8mbbr" Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.455410 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 01 14:23:30 crc kubenswrapper[4820]: W0201 14:23:30.466743 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod33866eae_2171_4a4b_9457_96dfc939edba.slice/crio-99fb2ed18200aae87af8eb82267870071954c399a1126ec2ea515e75f5c1a6ad WatchSource:0}: Error finding container 99fb2ed18200aae87af8eb82267870071954c399a1126ec2ea515e75f5c1a6ad: Status 404 returned error can't find the container with id 99fb2ed18200aae87af8eb82267870071954c399a1126ec2ea515e75f5c1a6ad Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.472164 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jd9wc"] Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.472388 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.575557 4820 generic.go:334] "Generic (PLEG): container finished" podID="26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b" containerID="389fdc9ba957c4e12f780f69120960fce02ce01267122af3a5346d37c0089cb5" exitCode=0 Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.575814 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nswhf" event={"ID":"26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b","Type":"ContainerDied","Data":"389fdc9ba957c4e12f780f69120960fce02ce01267122af3a5346d37c0089cb5"} Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.575915 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nswhf" event={"ID":"26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b","Type":"ContainerStarted","Data":"bf182e8b14900be19e41ae59c11d8b892501377bfb169a2bd205b53741a86427"} Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.577468 4820 generic.go:334] "Generic (PLEG): container finished" podID="f73b6fb9-8420-42fe-9b3d-42d17a204743" containerID="cb4f45c86e1fdd49847a9fe55ff2951958d50e87766a303ba12e1b4445709985" exitCode=0 Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.577536 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkfmb" event={"ID":"f73b6fb9-8420-42fe-9b3d-42d17a204743","Type":"ContainerDied","Data":"cb4f45c86e1fdd49847a9fe55ff2951958d50e87766a303ba12e1b4445709985"} Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.580125 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6ed27d19-0a49-4510-8056-98c4a7b869b7","Type":"ContainerDied","Data":"e07f350633bfd0ac051166e3ffd0fa21c6d42d5e5161628ca13aed57129617fc"} Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.580335 4820 generic.go:334] "Generic (PLEG): container finished" podID="6ed27d19-0a49-4510-8056-98c4a7b869b7" containerID="e07f350633bfd0ac051166e3ffd0fa21c6d42d5e5161628ca13aed57129617fc" exitCode=0 Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.584617 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"33866eae-2171-4a4b-9457-96dfc939edba","Type":"ContainerStarted","Data":"99fb2ed18200aae87af8eb82267870071954c399a1126ec2ea515e75f5c1a6ad"} Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 
14:23:30.712538 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8mbbr"] Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.729701 4820 patch_prober.go:28] interesting pod/router-default-5444994796-5rtqk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 01 14:23:30 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 01 14:23:30 crc kubenswrapper[4820]: [+]process-running ok Feb 01 14:23:30 crc kubenswrapper[4820]: healthz check failed Feb 01 14:23:30 crc kubenswrapper[4820]: I0201 14:23:30.729741 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rtqk" podUID="040a4a57-4d8b-4209-8330-a0e2f3195384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 14:23:30 crc kubenswrapper[4820]: W0201 14:23:30.787899 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod382873c4_83aa_4693_9eb8_7b1f41b0f22b.slice/crio-52492a1310392e18f482ded28d1d5c84cbb9b2fcb9b4991a2c841bfdf4cc232a WatchSource:0}: Error finding container 52492a1310392e18f482ded28d1d5c84cbb9b2fcb9b4991a2c841bfdf4cc232a: Status 404 returned error can't find the container with id 52492a1310392e18f482ded28d1d5c84cbb9b2fcb9b4991a2c841bfdf4cc232a Feb 01 14:23:31 crc kubenswrapper[4820]: I0201 14:23:31.596465 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"33866eae-2171-4a4b-9457-96dfc939edba","Type":"ContainerStarted","Data":"da4d2b7ef54bd22ec0f2e7540d478410a14c899febed70d5c123d5acc2eb0c1d"} Feb 01 14:23:31 crc kubenswrapper[4820]: I0201 14:23:31.600350 4820 generic.go:334] "Generic (PLEG): container finished" podID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" containerID="9497117ec84edfbe557be95f5b0c71bef28dc622147ef5f01e4a4afd3948055d" exitCode=0 Feb 01 14:23:31 crc kubenswrapper[4820]: I0201 14:23:31.600389 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8mbbr" event={"ID":"382873c4-83aa-4693-9eb8-7b1f41b0f22b","Type":"ContainerDied","Data":"9497117ec84edfbe557be95f5b0c71bef28dc622147ef5f01e4a4afd3948055d"} Feb 01 14:23:31 crc kubenswrapper[4820]: I0201 14:23:31.600453 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8mbbr" event={"ID":"382873c4-83aa-4693-9eb8-7b1f41b0f22b","Type":"ContainerStarted","Data":"52492a1310392e18f482ded28d1d5c84cbb9b2fcb9b4991a2c841bfdf4cc232a"} Feb 01 14:23:31 crc kubenswrapper[4820]: I0201 14:23:31.602541 4820 generic.go:334] "Generic (PLEG): container finished" podID="df4ea2d8-4be0-4e30-b48a-484a93d725b0" containerID="8187f366604b4eb05d3b122a600770b61fa44c20cc56ad80d79b2ddd2869babe" exitCode=0 Feb 01 14:23:31 crc kubenswrapper[4820]: I0201 14:23:31.602610 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jd9wc" event={"ID":"df4ea2d8-4be0-4e30-b48a-484a93d725b0","Type":"ContainerDied","Data":"8187f366604b4eb05d3b122a600770b61fa44c20cc56ad80d79b2ddd2869babe"} Feb 01 14:23:31 crc kubenswrapper[4820]: I0201 14:23:31.602647 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jd9wc" 
event={"ID":"df4ea2d8-4be0-4e30-b48a-484a93d725b0","Type":"ContainerStarted","Data":"31b9502ab198a2b96078dcac540311b21c66e9211966a891aaab58b2787078cc"} Feb 01 14:23:31 crc kubenswrapper[4820]: I0201 14:23:31.612957 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.612921747 podStartE2EDuration="2.612921747s" podCreationTimestamp="2026-02-01 14:23:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:23:31.60892222 +0000 UTC m=+153.129288514" watchObservedRunningTime="2026-02-01 14:23:31.612921747 +0000 UTC m=+153.133288031" Feb 01 14:23:31 crc kubenswrapper[4820]: I0201 14:23:31.724540 4820 patch_prober.go:28] interesting pod/router-default-5444994796-5rtqk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 01 14:23:31 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 01 14:23:31 crc kubenswrapper[4820]: [+]process-running ok Feb 01 14:23:31 crc kubenswrapper[4820]: healthz check failed Feb 01 14:23:31 crc kubenswrapper[4820]: I0201 14:23:31.724594 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rtqk" podUID="040a4a57-4d8b-4209-8330-a0e2f3195384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 14:23:32 crc kubenswrapper[4820]: I0201 14:23:32.008427 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 14:23:32 crc kubenswrapper[4820]: I0201 14:23:32.152583 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ed27d19-0a49-4510-8056-98c4a7b869b7-kubelet-dir\") pod \"6ed27d19-0a49-4510-8056-98c4a7b869b7\" (UID: \"6ed27d19-0a49-4510-8056-98c4a7b869b7\") " Feb 01 14:23:32 crc kubenswrapper[4820]: I0201 14:23:32.152705 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ed27d19-0a49-4510-8056-98c4a7b869b7-kube-api-access\") pod \"6ed27d19-0a49-4510-8056-98c4a7b869b7\" (UID: \"6ed27d19-0a49-4510-8056-98c4a7b869b7\") " Feb 01 14:23:32 crc kubenswrapper[4820]: I0201 14:23:32.152716 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ed27d19-0a49-4510-8056-98c4a7b869b7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6ed27d19-0a49-4510-8056-98c4a7b869b7" (UID: "6ed27d19-0a49-4510-8056-98c4a7b869b7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:23:32 crc kubenswrapper[4820]: I0201 14:23:32.153123 4820 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ed27d19-0a49-4510-8056-98c4a7b869b7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 01 14:23:32 crc kubenswrapper[4820]: I0201 14:23:32.161977 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ed27d19-0a49-4510-8056-98c4a7b869b7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6ed27d19-0a49-4510-8056-98c4a7b869b7" (UID: "6ed27d19-0a49-4510-8056-98c4a7b869b7"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:23:32 crc kubenswrapper[4820]: I0201 14:23:32.254361 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ed27d19-0a49-4510-8056-98c4a7b869b7-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 01 14:23:32 crc kubenswrapper[4820]: I0201 14:23:32.643310 4820 generic.go:334] "Generic (PLEG): container finished" podID="33866eae-2171-4a4b-9457-96dfc939edba" containerID="da4d2b7ef54bd22ec0f2e7540d478410a14c899febed70d5c123d5acc2eb0c1d" exitCode=0 Feb 01 14:23:32 crc kubenswrapper[4820]: I0201 14:23:32.643484 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"33866eae-2171-4a4b-9457-96dfc939edba","Type":"ContainerDied","Data":"da4d2b7ef54bd22ec0f2e7540d478410a14c899febed70d5c123d5acc2eb0c1d"} Feb 01 14:23:32 crc kubenswrapper[4820]: I0201 14:23:32.648394 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6ed27d19-0a49-4510-8056-98c4a7b869b7","Type":"ContainerDied","Data":"8b8324213a91914d274c22461f2923ce351ee810579ca4a797c5cdb8d0456054"} Feb 01 14:23:32 crc kubenswrapper[4820]: I0201 14:23:32.648417 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b8324213a91914d274c22461f2923ce351ee810579ca4a797c5cdb8d0456054" Feb 01 14:23:32 crc kubenswrapper[4820]: I0201 14:23:32.648460 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 14:23:32 crc kubenswrapper[4820]: I0201 14:23:32.726212 4820 patch_prober.go:28] interesting pod/router-default-5444994796-5rtqk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 01 14:23:32 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 01 14:23:32 crc kubenswrapper[4820]: [+]process-running ok Feb 01 14:23:32 crc kubenswrapper[4820]: healthz check failed Feb 01 14:23:32 crc kubenswrapper[4820]: I0201 14:23:32.726272 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rtqk" podUID="040a4a57-4d8b-4209-8330-a0e2f3195384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 14:23:33 crc kubenswrapper[4820]: I0201 14:23:33.725264 4820 patch_prober.go:28] interesting pod/router-default-5444994796-5rtqk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 01 14:23:33 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 01 14:23:33 crc kubenswrapper[4820]: [+]process-running ok Feb 01 14:23:33 crc kubenswrapper[4820]: healthz check failed Feb 01 14:23:33 crc kubenswrapper[4820]: I0201 14:23:33.725331 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rtqk" podUID="040a4a57-4d8b-4209-8330-a0e2f3195384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 14:23:33 crc kubenswrapper[4820]: I0201 14:23:33.946291 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 14:23:34 crc kubenswrapper[4820]: I0201 14:23:34.000783 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/33866eae-2171-4a4b-9457-96dfc939edba-kube-api-access\") pod \"33866eae-2171-4a4b-9457-96dfc939edba\" (UID: \"33866eae-2171-4a4b-9457-96dfc939edba\") " Feb 01 14:23:34 crc kubenswrapper[4820]: I0201 14:23:34.001246 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/33866eae-2171-4a4b-9457-96dfc939edba-kubelet-dir\") pod \"33866eae-2171-4a4b-9457-96dfc939edba\" (UID: \"33866eae-2171-4a4b-9457-96dfc939edba\") " Feb 01 14:23:34 crc kubenswrapper[4820]: I0201 14:23:34.001784 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33866eae-2171-4a4b-9457-96dfc939edba-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "33866eae-2171-4a4b-9457-96dfc939edba" (UID: "33866eae-2171-4a4b-9457-96dfc939edba"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:23:34 crc kubenswrapper[4820]: I0201 14:23:34.024078 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33866eae-2171-4a4b-9457-96dfc939edba-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "33866eae-2171-4a4b-9457-96dfc939edba" (UID: "33866eae-2171-4a4b-9457-96dfc939edba"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:23:34 crc kubenswrapper[4820]: I0201 14:23:34.106084 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/33866eae-2171-4a4b-9457-96dfc939edba-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 01 14:23:34 crc kubenswrapper[4820]: I0201 14:23:34.106123 4820 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/33866eae-2171-4a4b-9457-96dfc939edba-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 01 14:23:34 crc kubenswrapper[4820]: I0201 14:23:34.258829 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:34 crc kubenswrapper[4820]: I0201 14:23:34.269170 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-gwgjr" Feb 01 14:23:34 crc kubenswrapper[4820]: I0201 14:23:34.676708 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 14:23:34 crc kubenswrapper[4820]: I0201 14:23:34.677347 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"33866eae-2171-4a4b-9457-96dfc939edba","Type":"ContainerDied","Data":"99fb2ed18200aae87af8eb82267870071954c399a1126ec2ea515e75f5c1a6ad"} Feb 01 14:23:34 crc kubenswrapper[4820]: I0201 14:23:34.677382 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99fb2ed18200aae87af8eb82267870071954c399a1126ec2ea515e75f5c1a6ad" Feb 01 14:23:34 crc kubenswrapper[4820]: I0201 14:23:34.725582 4820 patch_prober.go:28] interesting pod/router-default-5444994796-5rtqk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 01 14:23:34 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 01 14:23:34 crc kubenswrapper[4820]: [+]process-running ok Feb 01 14:23:34 crc kubenswrapper[4820]: healthz check failed Feb 01 14:23:34 crc kubenswrapper[4820]: I0201 14:23:34.725644 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rtqk" podUID="040a4a57-4d8b-4209-8330-a0e2f3195384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 14:23:35 crc kubenswrapper[4820]: I0201 14:23:35.224497 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-hkggw" Feb 01 14:23:35 crc kubenswrapper[4820]: I0201 14:23:35.724353 4820 patch_prober.go:28] interesting pod/router-default-5444994796-5rtqk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 01 14:23:35 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 01 14:23:35 crc kubenswrapper[4820]: [+]process-running ok Feb 01 14:23:35 crc kubenswrapper[4820]: healthz check failed Feb 01 14:23:35 crc kubenswrapper[4820]: I0201 14:23:35.724409 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rtqk" podUID="040a4a57-4d8b-4209-8330-a0e2f3195384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 14:23:36 crc kubenswrapper[4820]: I0201 14:23:36.395040 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:23:36 crc kubenswrapper[4820]: I0201 14:23:36.730529 4820 patch_prober.go:28] interesting pod/router-default-5444994796-5rtqk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 01 14:23:36 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 01 14:23:36 crc kubenswrapper[4820]: [+]process-running ok Feb 01 14:23:36 crc kubenswrapper[4820]: healthz check failed Feb 01 14:23:36 crc kubenswrapper[4820]: I0201 14:23:36.730596 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rtqk" podUID="040a4a57-4d8b-4209-8330-a0e2f3195384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 14:23:37 crc kubenswrapper[4820]: I0201 14:23:37.725176 4820 
patch_prober.go:28] interesting pod/router-default-5444994796-5rtqk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 01 14:23:37 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 01 14:23:37 crc kubenswrapper[4820]: [+]process-running ok Feb 01 14:23:37 crc kubenswrapper[4820]: healthz check failed Feb 01 14:23:37 crc kubenswrapper[4820]: I0201 14:23:37.725240 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rtqk" podUID="040a4a57-4d8b-4209-8330-a0e2f3195384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 14:23:38 crc kubenswrapper[4820]: I0201 14:23:38.723896 4820 patch_prober.go:28] interesting pod/router-default-5444994796-5rtqk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 01 14:23:38 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 01 14:23:38 crc kubenswrapper[4820]: [+]process-running ok Feb 01 14:23:38 crc kubenswrapper[4820]: healthz check failed Feb 01 14:23:38 crc kubenswrapper[4820]: I0201 14:23:38.723964 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rtqk" podUID="040a4a57-4d8b-4209-8330-a0e2f3195384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 14:23:39 crc kubenswrapper[4820]: I0201 14:23:39.377625 4820 patch_prober.go:28] interesting pod/console-f9d7485db-j7nmb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 01 14:23:39 crc kubenswrapper[4820]: I0201 14:23:39.377953 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-j7nmb" podUID="ecb9a255-58d7-4e9e-8861-7b31bc44ed3e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 01 14:23:39 crc kubenswrapper[4820]: I0201 14:23:39.662053 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-m7jf2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 01 14:23:39 crc kubenswrapper[4820]: I0201 14:23:39.662069 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-m7jf2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 01 14:23:39 crc kubenswrapper[4820]: I0201 14:23:39.662100 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-m7jf2" podUID="3449e7b8-24df-4789-959f-4ac101303cc2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 01 14:23:39 crc kubenswrapper[4820]: I0201 14:23:39.662144 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-m7jf2" podUID="3449e7b8-24df-4789-959f-4ac101303cc2" containerName="download-server" 
probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 01 14:23:39 crc kubenswrapper[4820]: I0201 14:23:39.724867 4820 patch_prober.go:28] interesting pod/router-default-5444994796-5rtqk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 01 14:23:39 crc kubenswrapper[4820]: [-]has-synced failed: reason withheld Feb 01 14:23:39 crc kubenswrapper[4820]: [+]process-running ok Feb 01 14:23:39 crc kubenswrapper[4820]: healthz check failed Feb 01 14:23:39 crc kubenswrapper[4820]: I0201 14:23:39.724931 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rtqk" podUID="040a4a57-4d8b-4209-8330-a0e2f3195384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 14:23:40 crc kubenswrapper[4820]: I0201 14:23:40.740262 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-5rtqk" Feb 01 14:23:40 crc kubenswrapper[4820]: I0201 14:23:40.742845 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-5rtqk" Feb 01 14:23:41 crc kubenswrapper[4820]: I0201 14:23:41.759225 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs\") pod \"network-metrics-daemon-dj7sg\" (UID: \"8befd56b-2ebe-48c7-9027-4f906b2e09d5\") " pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:23:41 crc kubenswrapper[4820]: I0201 14:23:41.775431 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8befd56b-2ebe-48c7-9027-4f906b2e09d5-metrics-certs\") pod \"network-metrics-daemon-dj7sg\" (UID: \"8befd56b-2ebe-48c7-9027-4f906b2e09d5\") " pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:23:41 crc kubenswrapper[4820]: I0201 14:23:41.960754 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dj7sg" Feb 01 14:23:47 crc kubenswrapper[4820]: I0201 14:23:47.705121 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jzsbs"] Feb 01 14:23:47 crc kubenswrapper[4820]: I0201 14:23:47.705937 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" podUID="8c2d6da6-c397-4077-81b1-d5b492811214" containerName="controller-manager" containerID="cri-o://71646fa262d84e8e47f34000feeeeb0f34845964dc4ae9f28d63a2b46058227b" gracePeriod=30 Feb 01 14:23:47 crc kubenswrapper[4820]: I0201 14:23:47.709908 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545"] Feb 01 14:23:47 crc kubenswrapper[4820]: I0201 14:23:47.710135 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" podUID="df0df42a-d0cc-4564-856d-a0d3ace0021f" containerName="route-controller-manager" containerID="cri-o://d1c47ac58b8769e4792389be6a922ea590c86e82e34a5619e49f67620066ceea" gracePeriod=30 Feb 01 14:23:48 crc kubenswrapper[4820]: I0201 14:23:48.695734 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:23:49 crc kubenswrapper[4820]: I0201 14:23:49.242953 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:23:49 crc kubenswrapper[4820]: I0201 14:23:49.243014 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:23:49 crc kubenswrapper[4820]: I0201 14:23:49.309680 4820 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-jzsbs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Feb 01 14:23:49 crc kubenswrapper[4820]: I0201 14:23:49.309777 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" podUID="8c2d6da6-c397-4077-81b1-d5b492811214" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Feb 01 14:23:49 crc kubenswrapper[4820]: I0201 14:23:49.388308 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:49 crc kubenswrapper[4820]: I0201 14:23:49.393154 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:23:49 crc kubenswrapper[4820]: I0201 14:23:49.527474 4820 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-95545 container/route-controller-manager 
namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 01 14:23:49 crc kubenswrapper[4820]: I0201 14:23:49.527530 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" podUID="df0df42a-d0cc-4564-856d-a0d3ace0021f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 01 14:23:49 crc kubenswrapper[4820]: I0201 14:23:49.659977 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-m7jf2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 01 14:23:49 crc kubenswrapper[4820]: I0201 14:23:49.660045 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-m7jf2" podUID="3449e7b8-24df-4789-959f-4ac101303cc2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 01 14:23:49 crc kubenswrapper[4820]: I0201 14:23:49.660087 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-m7jf2" Feb 01 14:23:49 crc kubenswrapper[4820]: I0201 14:23:49.659986 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-m7jf2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 01 14:23:49 crc kubenswrapper[4820]: I0201 14:23:49.660127 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-m7jf2" podUID="3449e7b8-24df-4789-959f-4ac101303cc2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 01 14:23:49 crc kubenswrapper[4820]: I0201 14:23:49.660584 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-m7jf2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 01 14:23:49 crc kubenswrapper[4820]: I0201 14:23:49.660639 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"9aab66572e67af8077497567eb1e0b420b0816eb5188a7b9291b04d10be2dbdb"} pod="openshift-console/downloads-7954f5f757-m7jf2" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 01 14:23:49 crc kubenswrapper[4820]: I0201 14:23:49.660726 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-m7jf2" podUID="3449e7b8-24df-4789-959f-4ac101303cc2" containerName="download-server" containerID="cri-o://9aab66572e67af8077497567eb1e0b420b0816eb5188a7b9291b04d10be2dbdb" gracePeriod=2 Feb 01 14:23:49 crc kubenswrapper[4820]: I0201 14:23:49.660702 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-m7jf2" podUID="3449e7b8-24df-4789-959f-4ac101303cc2" containerName="download-server" 
probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 01 14:23:49 crc kubenswrapper[4820]: I0201 14:23:49.795758 4820 generic.go:334] "Generic (PLEG): container finished" podID="df0df42a-d0cc-4564-856d-a0d3ace0021f" containerID="d1c47ac58b8769e4792389be6a922ea590c86e82e34a5619e49f67620066ceea" exitCode=0 Feb 01 14:23:49 crc kubenswrapper[4820]: I0201 14:23:49.795917 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" event={"ID":"df0df42a-d0cc-4564-856d-a0d3ace0021f","Type":"ContainerDied","Data":"d1c47ac58b8769e4792389be6a922ea590c86e82e34a5619e49f67620066ceea"} Feb 01 14:23:50 crc kubenswrapper[4820]: I0201 14:23:50.800889 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" event={"ID":"8c2d6da6-c397-4077-81b1-d5b492811214","Type":"ContainerDied","Data":"71646fa262d84e8e47f34000feeeeb0f34845964dc4ae9f28d63a2b46058227b"} Feb 01 14:23:50 crc kubenswrapper[4820]: I0201 14:23:50.800866 4820 generic.go:334] "Generic (PLEG): container finished" podID="8c2d6da6-c397-4077-81b1-d5b492811214" containerID="71646fa262d84e8e47f34000feeeeb0f34845964dc4ae9f28d63a2b46058227b" exitCode=0 Feb 01 14:23:59 crc kubenswrapper[4820]: I0201 14:23:59.311271 4820 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-jzsbs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Feb 01 14:23:59 crc kubenswrapper[4820]: I0201 14:23:59.312169 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" podUID="8c2d6da6-c397-4077-81b1-d5b492811214" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Feb 01 14:23:59 crc kubenswrapper[4820]: I0201 14:23:59.528364 4820 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-95545 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 01 14:23:59 crc kubenswrapper[4820]: I0201 14:23:59.528444 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" podUID="df0df42a-d0cc-4564-856d-a0d3ace0021f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 01 14:23:59 crc kubenswrapper[4820]: I0201 14:23:59.660843 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-m7jf2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 01 14:23:59 crc kubenswrapper[4820]: I0201 14:23:59.660983 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-m7jf2" podUID="3449e7b8-24df-4789-959f-4ac101303cc2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: 
connection refused" Feb 01 14:24:00 crc kubenswrapper[4820]: I0201 14:24:00.164706 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbqv9" Feb 01 14:24:05 crc kubenswrapper[4820]: I0201 14:24:05.900505 4820 generic.go:334] "Generic (PLEG): container finished" podID="3449e7b8-24df-4789-959f-4ac101303cc2" containerID="9aab66572e67af8077497567eb1e0b420b0816eb5188a7b9291b04d10be2dbdb" exitCode=0 Feb 01 14:24:05 crc kubenswrapper[4820]: I0201 14:24:05.900586 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-m7jf2" event={"ID":"3449e7b8-24df-4789-959f-4ac101303cc2","Type":"ContainerDied","Data":"9aab66572e67af8077497567eb1e0b420b0816eb5188a7b9291b04d10be2dbdb"} Feb 01 14:24:07 crc kubenswrapper[4820]: I0201 14:24:07.324726 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 14:24:08 crc kubenswrapper[4820]: I0201 14:24:08.601043 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 01 14:24:08 crc kubenswrapper[4820]: E0201 14:24:08.601292 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33866eae-2171-4a4b-9457-96dfc939edba" containerName="pruner" Feb 01 14:24:08 crc kubenswrapper[4820]: I0201 14:24:08.601306 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="33866eae-2171-4a4b-9457-96dfc939edba" containerName="pruner" Feb 01 14:24:08 crc kubenswrapper[4820]: E0201 14:24:08.601324 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ed27d19-0a49-4510-8056-98c4a7b869b7" containerName="pruner" Feb 01 14:24:08 crc kubenswrapper[4820]: I0201 14:24:08.601332 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ed27d19-0a49-4510-8056-98c4a7b869b7" containerName="pruner" Feb 01 14:24:08 crc kubenswrapper[4820]: I0201 14:24:08.601450 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="33866eae-2171-4a4b-9457-96dfc939edba" containerName="pruner" Feb 01 14:24:08 crc kubenswrapper[4820]: I0201 14:24:08.601463 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ed27d19-0a49-4510-8056-98c4a7b869b7" containerName="pruner" Feb 01 14:24:08 crc kubenswrapper[4820]: I0201 14:24:08.601898 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 14:24:08 crc kubenswrapper[4820]: I0201 14:24:08.604468 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 01 14:24:08 crc kubenswrapper[4820]: I0201 14:24:08.605859 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 01 14:24:08 crc kubenswrapper[4820]: I0201 14:24:08.624857 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 01 14:24:08 crc kubenswrapper[4820]: I0201 14:24:08.697753 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0801e2ec-1a96-4dce-b86f-7f753b4150f4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0801e2ec-1a96-4dce-b86f-7f753b4150f4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 14:24:08 crc kubenswrapper[4820]: I0201 14:24:08.697810 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0801e2ec-1a96-4dce-b86f-7f753b4150f4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0801e2ec-1a96-4dce-b86f-7f753b4150f4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 14:24:08 crc kubenswrapper[4820]: E0201 14:24:08.742657 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 01 14:24:08 crc kubenswrapper[4820]: E0201 14:24:08.742915 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqdj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-d7gvg_openshift-marketplace(90ef31dc-afaa-4644-a842-43b67375e125): ErrImagePull: rpc error: code = Canceled desc = 
copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 01 14:24:08 crc kubenswrapper[4820]: E0201 14:24:08.744132 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-d7gvg" podUID="90ef31dc-afaa-4644-a842-43b67375e125" Feb 01 14:24:08 crc kubenswrapper[4820]: I0201 14:24:08.798696 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0801e2ec-1a96-4dce-b86f-7f753b4150f4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0801e2ec-1a96-4dce-b86f-7f753b4150f4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 14:24:08 crc kubenswrapper[4820]: I0201 14:24:08.798783 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0801e2ec-1a96-4dce-b86f-7f753b4150f4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0801e2ec-1a96-4dce-b86f-7f753b4150f4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 14:24:08 crc kubenswrapper[4820]: I0201 14:24:08.798811 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0801e2ec-1a96-4dce-b86f-7f753b4150f4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0801e2ec-1a96-4dce-b86f-7f753b4150f4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 14:24:08 crc kubenswrapper[4820]: I0201 14:24:08.839255 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0801e2ec-1a96-4dce-b86f-7f753b4150f4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0801e2ec-1a96-4dce-b86f-7f753b4150f4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 14:24:08 crc kubenswrapper[4820]: I0201 14:24:08.917749 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 14:24:09 crc kubenswrapper[4820]: I0201 14:24:09.661411 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-m7jf2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 01 14:24:09 crc kubenswrapper[4820]: I0201 14:24:09.661826 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-m7jf2" podUID="3449e7b8-24df-4789-959f-4ac101303cc2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 01 14:24:10 crc kubenswrapper[4820]: E0201 14:24:10.163043 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-d7gvg" podUID="90ef31dc-afaa-4644-a842-43b67375e125" Feb 01 14:24:10 crc kubenswrapper[4820]: E0201 14:24:10.210072 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 01 14:24:10 crc kubenswrapper[4820]: E0201 14:24:10.210218 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-46s25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-z5dnc_openshift-marketplace(5cd3df7b-e150-490b-9785-ccfab6b264b5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 01 14:24:10 crc kubenswrapper[4820]: E0201 14:24:10.211530 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-z5dnc" podUID="5cd3df7b-e150-490b-9785-ccfab6b264b5" Feb 01 14:24:10 crc kubenswrapper[4820]: E0201 14:24:10.229698 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 01 14:24:10 crc kubenswrapper[4820]: E0201 14:24:10.229848 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sm5ql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-p6kqz_openshift-marketplace(3dee68b0-a47b-49fd-a889-7bf3bc58c380): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 01 14:24:10 crc kubenswrapper[4820]: E0201 14:24:10.231207 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-p6kqz" podUID="3dee68b0-a47b-49fd-a889-7bf3bc58c380" Feb 01 14:24:10 crc kubenswrapper[4820]: I0201 14:24:10.309255 4820 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-jzsbs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 01 14:24:10 crc kubenswrapper[4820]: I0201 14:24:10.309318 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" podUID="8c2d6da6-c397-4077-81b1-d5b492811214" containerName="controller-manager" probeResult="failure" 
output="Get \"https://10.217.0.16:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 01 14:24:10 crc kubenswrapper[4820]: I0201 14:24:10.527351 4820 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-95545 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 01 14:24:10 crc kubenswrapper[4820]: I0201 14:24:10.527671 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" podUID="df0df42a-d0cc-4564-856d-a0d3ace0021f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 01 14:24:11 crc kubenswrapper[4820]: E0201 14:24:11.708823 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-p6kqz" podUID="3dee68b0-a47b-49fd-a889-7bf3bc58c380" Feb 01 14:24:11 crc kubenswrapper[4820]: E0201 14:24:11.708839 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5dnc" podUID="5cd3df7b-e150-490b-9785-ccfab6b264b5" Feb 01 14:24:11 crc kubenswrapper[4820]: E0201 14:24:11.798965 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 01 14:24:11 crc kubenswrapper[4820]: E0201 14:24:11.799153 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ktn7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-nswhf_openshift-marketplace(26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 01 14:24:11 crc kubenswrapper[4820]: E0201 14:24:11.800284 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-nswhf" podUID="26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b" Feb 01 14:24:11 crc kubenswrapper[4820]: E0201 14:24:11.808240 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 01 14:24:11 crc kubenswrapper[4820]: E0201 14:24:11.808389 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gjjlw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-dmllb_openshift-marketplace(4fdec671-8c4f-4814-80f3-eb6580b4a706): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 01 14:24:11 crc kubenswrapper[4820]: E0201 14:24:11.809604 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-dmllb" podUID="4fdec671-8c4f-4814-80f3-eb6580b4a706" Feb 01 14:24:11 crc kubenswrapper[4820]: E0201 14:24:11.876891 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 01 14:24:11 crc kubenswrapper[4820]: E0201 14:24:11.877057 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g9nv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vkfmb_openshift-marketplace(f73b6fb9-8420-42fe-9b3d-42d17a204743): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 01 14:24:11 crc kubenswrapper[4820]: E0201 14:24:11.878297 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-vkfmb" podUID="f73b6fb9-8420-42fe-9b3d-42d17a204743" Feb 01 14:24:14 crc kubenswrapper[4820]: I0201 14:24:14.198838 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 01 14:24:14 crc kubenswrapper[4820]: I0201 14:24:14.200258 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 01 14:24:14 crc kubenswrapper[4820]: I0201 14:24:14.206294 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 01 14:24:14 crc kubenswrapper[4820]: I0201 14:24:14.277735 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b285186-eeb9-40d4-95cd-bc27968b969f-kube-api-access\") pod \"installer-9-crc\" (UID: \"3b285186-eeb9-40d4-95cd-bc27968b969f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 01 14:24:14 crc kubenswrapper[4820]: I0201 14:24:14.277804 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3b285186-eeb9-40d4-95cd-bc27968b969f-var-lock\") pod \"installer-9-crc\" (UID: \"3b285186-eeb9-40d4-95cd-bc27968b969f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 01 14:24:14 crc kubenswrapper[4820]: I0201 14:24:14.277978 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b285186-eeb9-40d4-95cd-bc27968b969f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3b285186-eeb9-40d4-95cd-bc27968b969f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 01 14:24:14 crc kubenswrapper[4820]: I0201 14:24:14.378971 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b285186-eeb9-40d4-95cd-bc27968b969f-kube-api-access\") pod \"installer-9-crc\" (UID: \"3b285186-eeb9-40d4-95cd-bc27968b969f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 01 14:24:14 crc kubenswrapper[4820]: I0201 14:24:14.379025 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3b285186-eeb9-40d4-95cd-bc27968b969f-var-lock\") pod \"installer-9-crc\" (UID: \"3b285186-eeb9-40d4-95cd-bc27968b969f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 01 14:24:14 crc kubenswrapper[4820]: I0201 14:24:14.379084 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b285186-eeb9-40d4-95cd-bc27968b969f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3b285186-eeb9-40d4-95cd-bc27968b969f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 01 14:24:14 crc kubenswrapper[4820]: I0201 14:24:14.379139 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b285186-eeb9-40d4-95cd-bc27968b969f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3b285186-eeb9-40d4-95cd-bc27968b969f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 01 14:24:14 crc kubenswrapper[4820]: I0201 14:24:14.379431 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3b285186-eeb9-40d4-95cd-bc27968b969f-var-lock\") pod \"installer-9-crc\" (UID: \"3b285186-eeb9-40d4-95cd-bc27968b969f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 01 14:24:14 crc kubenswrapper[4820]: I0201 14:24:14.400365 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b285186-eeb9-40d4-95cd-bc27968b969f-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"3b285186-eeb9-40d4-95cd-bc27968b969f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 01 14:24:14 crc kubenswrapper[4820]: I0201 14:24:14.525392 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 01 14:24:15 crc kubenswrapper[4820]: E0201 14:24:15.109518 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-nswhf" podUID="26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b" Feb 01 14:24:15 crc kubenswrapper[4820]: E0201 14:24:15.109549 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vkfmb" podUID="f73b6fb9-8420-42fe-9b3d-42d17a204743" Feb 01 14:24:15 crc kubenswrapper[4820]: E0201 14:24:15.109556 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-dmllb" podUID="4fdec671-8c4f-4814-80f3-eb6580b4a706" Feb 01 14:24:15 crc kubenswrapper[4820]: E0201 14:24:15.218573 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 01 14:24:15 crc kubenswrapper[4820]: E0201 14:24:15.219026 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tvm4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-jd9wc_openshift-marketplace(df4ea2d8-4be0-4e30-b48a-484a93d725b0): ErrImagePull: rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 01 14:24:15 crc kubenswrapper[4820]: E0201 14:24:15.220409 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-jd9wc" podUID="df4ea2d8-4be0-4e30-b48a-484a93d725b0" Feb 01 14:24:15 crc kubenswrapper[4820]: E0201 14:24:15.227060 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 01 14:24:15 crc kubenswrapper[4820]: E0201 14:24:15.227223 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pxp4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8mbbr_openshift-marketplace(382873c4-83aa-4693-9eb8-7b1f41b0f22b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 01 14:24:15 crc kubenswrapper[4820]: E0201 14:24:15.228406 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-8mbbr" podUID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.246726 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.279525 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c85bd9594-zstst"] Feb 01 14:24:15 crc kubenswrapper[4820]: E0201 14:24:15.279781 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2d6da6-c397-4077-81b1-d5b492811214" containerName="controller-manager" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.279801 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2d6da6-c397-4077-81b1-d5b492811214" containerName="controller-manager" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.279957 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c2d6da6-c397-4077-81b1-d5b492811214" containerName="controller-manager" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.280387 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.284607 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c85bd9594-zstst"] Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.305775 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c2d6da6-c397-4077-81b1-d5b492811214-serving-cert\") pod \"8c2d6da6-c397-4077-81b1-d5b492811214\" (UID: \"8c2d6da6-c397-4077-81b1-d5b492811214\") " Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.305851 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8c2d6da6-c397-4077-81b1-d5b492811214-proxy-ca-bundles\") pod \"8c2d6da6-c397-4077-81b1-d5b492811214\" (UID: \"8c2d6da6-c397-4077-81b1-d5b492811214\") " Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.305895 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2d6da6-c397-4077-81b1-d5b492811214-config\") pod \"8c2d6da6-c397-4077-81b1-d5b492811214\" (UID: \"8c2d6da6-c397-4077-81b1-d5b492811214\") " Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.306145 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8h7r\" (UniqueName: \"kubernetes.io/projected/8c2d6da6-c397-4077-81b1-d5b492811214-kube-api-access-d8h7r\") pod \"8c2d6da6-c397-4077-81b1-d5b492811214\" (UID: \"8c2d6da6-c397-4077-81b1-d5b492811214\") " Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.306170 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c2d6da6-c397-4077-81b1-d5b492811214-client-ca\") pod \"8c2d6da6-c397-4077-81b1-d5b492811214\" (UID: \"8c2d6da6-c397-4077-81b1-d5b492811214\") " Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.306387 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h7wf\" (UniqueName: \"kubernetes.io/projected/f705a784-9e52-4237-9b15-c7f61ff9ed29-kube-api-access-6h7wf\") pod \"controller-manager-5c85bd9594-zstst\" (UID: \"f705a784-9e52-4237-9b15-c7f61ff9ed29\") " pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 
14:24:15.306449 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f705a784-9e52-4237-9b15-c7f61ff9ed29-proxy-ca-bundles\") pod \"controller-manager-5c85bd9594-zstst\" (UID: \"f705a784-9e52-4237-9b15-c7f61ff9ed29\") " pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.306506 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f705a784-9e52-4237-9b15-c7f61ff9ed29-config\") pod \"controller-manager-5c85bd9594-zstst\" (UID: \"f705a784-9e52-4237-9b15-c7f61ff9ed29\") " pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.306533 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f705a784-9e52-4237-9b15-c7f61ff9ed29-serving-cert\") pod \"controller-manager-5c85bd9594-zstst\" (UID: \"f705a784-9e52-4237-9b15-c7f61ff9ed29\") " pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.306607 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f705a784-9e52-4237-9b15-c7f61ff9ed29-client-ca\") pod \"controller-manager-5c85bd9594-zstst\" (UID: \"f705a784-9e52-4237-9b15-c7f61ff9ed29\") " pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.306614 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2d6da6-c397-4077-81b1-d5b492811214-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8c2d6da6-c397-4077-81b1-d5b492811214" (UID: "8c2d6da6-c397-4077-81b1-d5b492811214"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.306868 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2d6da6-c397-4077-81b1-d5b492811214-config" (OuterVolumeSpecName: "config") pod "8c2d6da6-c397-4077-81b1-d5b492811214" (UID: "8c2d6da6-c397-4077-81b1-d5b492811214"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.307047 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2d6da6-c397-4077-81b1-d5b492811214-client-ca" (OuterVolumeSpecName: "client-ca") pod "8c2d6da6-c397-4077-81b1-d5b492811214" (UID: "8c2d6da6-c397-4077-81b1-d5b492811214"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.317513 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c2d6da6-c397-4077-81b1-d5b492811214-kube-api-access-d8h7r" (OuterVolumeSpecName: "kube-api-access-d8h7r") pod "8c2d6da6-c397-4077-81b1-d5b492811214" (UID: "8c2d6da6-c397-4077-81b1-d5b492811214"). InnerVolumeSpecName "kube-api-access-d8h7r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.319131 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c2d6da6-c397-4077-81b1-d5b492811214-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8c2d6da6-c397-4077-81b1-d5b492811214" (UID: "8c2d6da6-c397-4077-81b1-d5b492811214"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.319365 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.407369 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df0df42a-d0cc-4564-856d-a0d3ace0021f-serving-cert\") pod \"df0df42a-d0cc-4564-856d-a0d3ace0021f\" (UID: \"df0df42a-d0cc-4564-856d-a0d3ace0021f\") " Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.407544 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df0df42a-d0cc-4564-856d-a0d3ace0021f-client-ca\") pod \"df0df42a-d0cc-4564-856d-a0d3ace0021f\" (UID: \"df0df42a-d0cc-4564-856d-a0d3ace0021f\") " Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.407565 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2gvl\" (UniqueName: \"kubernetes.io/projected/df0df42a-d0cc-4564-856d-a0d3ace0021f-kube-api-access-j2gvl\") pod \"df0df42a-d0cc-4564-856d-a0d3ace0021f\" (UID: \"df0df42a-d0cc-4564-856d-a0d3ace0021f\") " Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.407589 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0df42a-d0cc-4564-856d-a0d3ace0021f-config\") pod \"df0df42a-d0cc-4564-856d-a0d3ace0021f\" (UID: \"df0df42a-d0cc-4564-856d-a0d3ace0021f\") " Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.407731 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f705a784-9e52-4237-9b15-c7f61ff9ed29-config\") pod \"controller-manager-5c85bd9594-zstst\" (UID: \"f705a784-9e52-4237-9b15-c7f61ff9ed29\") " pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.407758 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f705a784-9e52-4237-9b15-c7f61ff9ed29-serving-cert\") pod \"controller-manager-5c85bd9594-zstst\" (UID: \"f705a784-9e52-4237-9b15-c7f61ff9ed29\") " pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.407808 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f705a784-9e52-4237-9b15-c7f61ff9ed29-client-ca\") pod \"controller-manager-5c85bd9594-zstst\" (UID: \"f705a784-9e52-4237-9b15-c7f61ff9ed29\") " pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.407825 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h7wf\" (UniqueName: 
\"kubernetes.io/projected/f705a784-9e52-4237-9b15-c7f61ff9ed29-kube-api-access-6h7wf\") pod \"controller-manager-5c85bd9594-zstst\" (UID: \"f705a784-9e52-4237-9b15-c7f61ff9ed29\") " pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.407851 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f705a784-9e52-4237-9b15-c7f61ff9ed29-proxy-ca-bundles\") pod \"controller-manager-5c85bd9594-zstst\" (UID: \"f705a784-9e52-4237-9b15-c7f61ff9ed29\") " pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.407909 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8h7r\" (UniqueName: \"kubernetes.io/projected/8c2d6da6-c397-4077-81b1-d5b492811214-kube-api-access-d8h7r\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.407919 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c2d6da6-c397-4077-81b1-d5b492811214-client-ca\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.407928 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c2d6da6-c397-4077-81b1-d5b492811214-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.407936 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8c2d6da6-c397-4077-81b1-d5b492811214-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.407944 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2d6da6-c397-4077-81b1-d5b492811214-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.409128 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0df42a-d0cc-4564-856d-a0d3ace0021f-client-ca" (OuterVolumeSpecName: "client-ca") pod "df0df42a-d0cc-4564-856d-a0d3ace0021f" (UID: "df0df42a-d0cc-4564-856d-a0d3ace0021f"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.409723 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f705a784-9e52-4237-9b15-c7f61ff9ed29-proxy-ca-bundles\") pod \"controller-manager-5c85bd9594-zstst\" (UID: \"f705a784-9e52-4237-9b15-c7f61ff9ed29\") " pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.410167 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f705a784-9e52-4237-9b15-c7f61ff9ed29-client-ca\") pod \"controller-manager-5c85bd9594-zstst\" (UID: \"f705a784-9e52-4237-9b15-c7f61ff9ed29\") " pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.410623 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0df42a-d0cc-4564-856d-a0d3ace0021f-config" (OuterVolumeSpecName: "config") pod "df0df42a-d0cc-4564-856d-a0d3ace0021f" (UID: "df0df42a-d0cc-4564-856d-a0d3ace0021f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.411905 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f705a784-9e52-4237-9b15-c7f61ff9ed29-config\") pod \"controller-manager-5c85bd9594-zstst\" (UID: \"f705a784-9e52-4237-9b15-c7f61ff9ed29\") " pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.414342 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df0df42a-d0cc-4564-856d-a0d3ace0021f-kube-api-access-j2gvl" (OuterVolumeSpecName: "kube-api-access-j2gvl") pod "df0df42a-d0cc-4564-856d-a0d3ace0021f" (UID: "df0df42a-d0cc-4564-856d-a0d3ace0021f"). InnerVolumeSpecName "kube-api-access-j2gvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.414793 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f705a784-9e52-4237-9b15-c7f61ff9ed29-serving-cert\") pod \"controller-manager-5c85bd9594-zstst\" (UID: \"f705a784-9e52-4237-9b15-c7f61ff9ed29\") " pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.415226 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df0df42a-d0cc-4564-856d-a0d3ace0021f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "df0df42a-d0cc-4564-856d-a0d3ace0021f" (UID: "df0df42a-d0cc-4564-856d-a0d3ace0021f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.428197 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h7wf\" (UniqueName: \"kubernetes.io/projected/f705a784-9e52-4237-9b15-c7f61ff9ed29-kube-api-access-6h7wf\") pod \"controller-manager-5c85bd9594-zstst\" (UID: \"f705a784-9e52-4237-9b15-c7f61ff9ed29\") " pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.472243 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.509569 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df0df42a-d0cc-4564-856d-a0d3ace0021f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.509602 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2gvl\" (UniqueName: \"kubernetes.io/projected/df0df42a-d0cc-4564-856d-a0d3ace0021f-kube-api-access-j2gvl\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.509612 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df0df42a-d0cc-4564-856d-a0d3ace0021f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.509621 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0df42a-d0cc-4564-856d-a0d3ace0021f-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.580067 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-dj7sg"] Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.594578 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.634080 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.850702 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c85bd9594-zstst"] Feb 01 14:24:15 crc kubenswrapper[4820]: W0201 14:24:15.857243 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf705a784_9e52_4237_9b15_c7f61ff9ed29.slice/crio-3a285d04c70f43cdb7734f6e0f3878365ff37a3d8d223e9be705436e5080ba94 WatchSource:0}: Error finding container 3a285d04c70f43cdb7734f6e0f3878365ff37a3d8d223e9be705436e5080ba94: Status 404 returned error can't find the container with id 3a285d04c70f43cdb7734f6e0f3878365ff37a3d8d223e9be705436e5080ba94 Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.954508 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-m7jf2" event={"ID":"3449e7b8-24df-4789-959f-4ac101303cc2","Type":"ContainerStarted","Data":"31edb887331a967742737a97ecd9f79e7291215eca4a184d3369057b3027ab13"} Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.955036 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-m7jf2" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.955098 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-m7jf2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.955181 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-m7jf2" podUID="3449e7b8-24df-4789-959f-4ac101303cc2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.955601 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3b285186-eeb9-40d4-95cd-bc27968b969f","Type":"ContainerStarted","Data":"bcd1366b3d33af59c3f342f2863f24682b0ecf6eb012726334b54e39297b8f3b"} Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.963816 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0801e2ec-1a96-4dce-b86f-7f753b4150f4","Type":"ContainerStarted","Data":"625d84740592ca0e66d4489b25f748a76679c6e0661f8005249d5abb4bc441d5"} Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.965259 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.965682 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jzsbs" event={"ID":"8c2d6da6-c397-4077-81b1-d5b492811214","Type":"ContainerDied","Data":"e4ceb009e44bb9008f77aeef278ea3f33c6bd4bcd6bfd88827e8baf8eab1a8aa"} Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.965734 4820 scope.go:117] "RemoveContainer" containerID="71646fa262d84e8e47f34000feeeeb0f34845964dc4ae9f28d63a2b46058227b" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.967210 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" event={"ID":"f705a784-9e52-4237-9b15-c7f61ff9ed29","Type":"ContainerStarted","Data":"3a285d04c70f43cdb7734f6e0f3878365ff37a3d8d223e9be705436e5080ba94"} Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.969327 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.970021 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545" event={"ID":"df0df42a-d0cc-4564-856d-a0d3ace0021f","Type":"ContainerDied","Data":"2f6a650bcebbc2dbc33f52c64a6cb99f6b991b92c4c1f8de3504cb48b900b0bc"} Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.971590 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dj7sg" event={"ID":"8befd56b-2ebe-48c7-9027-4f906b2e09d5","Type":"ContainerStarted","Data":"409e8a6ac93b620a5d90afb13db23f34073fda24c75d29463816d071f913c4ff"} Feb 01 14:24:15 crc kubenswrapper[4820]: E0201 14:24:15.972967 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8mbbr" podUID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" Feb 01 14:24:15 crc kubenswrapper[4820]: E0201 14:24:15.973046 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-jd9wc" podUID="df4ea2d8-4be0-4e30-b48a-484a93d725b0" Feb 01 14:24:15 crc kubenswrapper[4820]: I0201 14:24:15.996687 4820 scope.go:117] "RemoveContainer" containerID="d1c47ac58b8769e4792389be6a922ea590c86e82e34a5619e49f67620066ceea" Feb 01 14:24:16 crc kubenswrapper[4820]: I0201 14:24:16.050865 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545"] Feb 01 14:24:16 crc kubenswrapper[4820]: I0201 14:24:16.054244 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-95545"] Feb 01 14:24:16 crc kubenswrapper[4820]: I0201 14:24:16.060549 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jzsbs"] Feb 01 14:24:16 crc kubenswrapper[4820]: I0201 14:24:16.060610 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-jzsbs"] Feb 01 14:24:16 crc kubenswrapper[4820]: I0201 14:24:16.978342 4820 generic.go:334] "Generic (PLEG): container finished" podID="0801e2ec-1a96-4dce-b86f-7f753b4150f4" containerID="eb45d6da3b1354ea150f7f519efbfe826e9c2528c0e79c2000db6c7988de7aec" exitCode=0 Feb 01 14:24:16 crc kubenswrapper[4820]: I0201 14:24:16.978380 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0801e2ec-1a96-4dce-b86f-7f753b4150f4","Type":"ContainerDied","Data":"eb45d6da3b1354ea150f7f519efbfe826e9c2528c0e79c2000db6c7988de7aec"} Feb 01 14:24:16 crc kubenswrapper[4820]: I0201 14:24:16.981504 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" event={"ID":"f705a784-9e52-4237-9b15-c7f61ff9ed29","Type":"ContainerStarted","Data":"ddb6a971db101b68662a47c8878ce6a323a89abbafc6f25b6267db07f6bd4e12"} Feb 01 14:24:16 crc kubenswrapper[4820]: I0201 14:24:16.981703 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" Feb 01 14:24:16 crc kubenswrapper[4820]: I0201 14:24:16.985564 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" Feb 01 14:24:16 crc kubenswrapper[4820]: I0201 14:24:16.986641 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dj7sg" event={"ID":"8befd56b-2ebe-48c7-9027-4f906b2e09d5","Type":"ContainerStarted","Data":"c3d3bc26b45bad5c8e53faa0747b0e8cb893d0a6b2f8bc1f8bb0478991aa1c7d"} Feb 01 14:24:16 crc kubenswrapper[4820]: I0201 14:24:16.986671 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dj7sg" event={"ID":"8befd56b-2ebe-48c7-9027-4f906b2e09d5","Type":"ContainerStarted","Data":"4b3135a4630533dffdc86bb652dca18d45fbeec4451691f5c734560511afcbb9"} Feb 01 14:24:16 crc kubenswrapper[4820]: I0201 14:24:16.988818 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3b285186-eeb9-40d4-95cd-bc27968b969f","Type":"ContainerStarted","Data":"049ad51645b7041dfd70c95610b9391fdac2b3663db2441a61a500c2aa81d081"} Feb 01 14:24:16 crc kubenswrapper[4820]: I0201 14:24:16.989279 4820 patch_prober.go:28] interesting pod/downloads-7954f5f757-m7jf2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 01 14:24:16 crc kubenswrapper[4820]: I0201 14:24:16.989326 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-m7jf2" podUID="3449e7b8-24df-4789-959f-4ac101303cc2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.009571 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.0095516 podStartE2EDuration="3.0095516s" podCreationTimestamp="2026-02-01 14:24:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:24:17.009369356 +0000 UTC m=+198.529735640" 
watchObservedRunningTime="2026-02-01 14:24:17.0095516 +0000 UTC m=+198.529917884" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.045774 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" podStartSLOduration=10.045757584 podStartE2EDuration="10.045757584s" podCreationTimestamp="2026-02-01 14:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:24:17.024123659 +0000 UTC m=+198.544489933" watchObservedRunningTime="2026-02-01 14:24:17.045757584 +0000 UTC m=+198.566123868" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.046774 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-dj7sg" podStartSLOduration=178.046769702 podStartE2EDuration="2m58.046769702s" podCreationTimestamp="2026-02-01 14:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:24:17.043884145 +0000 UTC m=+198.564250429" watchObservedRunningTime="2026-02-01 14:24:17.046769702 +0000 UTC m=+198.567135986" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.206301 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c2d6da6-c397-4077-81b1-d5b492811214" path="/var/lib/kubelet/pods/8c2d6da6-c397-4077-81b1-d5b492811214/volumes" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.206981 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df0df42a-d0cc-4564-856d-a0d3ace0021f" path="/var/lib/kubelet/pods/df0df42a-d0cc-4564-856d-a0d3ace0021f/volumes" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.596183 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6"] Feb 01 14:24:17 crc kubenswrapper[4820]: E0201 14:24:17.596469 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0df42a-d0cc-4564-856d-a0d3ace0021f" containerName="route-controller-manager" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.596485 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0df42a-d0cc-4564-856d-a0d3ace0021f" containerName="route-controller-manager" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.596576 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="df0df42a-d0cc-4564-856d-a0d3ace0021f" containerName="route-controller-manager" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.597009 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.600994 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.601461 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.601494 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.601589 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.601713 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.601832 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.612409 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6"] Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.635157 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j64lf\" (UniqueName: \"kubernetes.io/projected/335859ad-cb89-458a-9488-8afc994f9471-kube-api-access-j64lf\") pod \"route-controller-manager-7888bd7874-kzrp6\" (UID: \"335859ad-cb89-458a-9488-8afc994f9471\") " pod="openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.635254 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/335859ad-cb89-458a-9488-8afc994f9471-client-ca\") pod \"route-controller-manager-7888bd7874-kzrp6\" (UID: \"335859ad-cb89-458a-9488-8afc994f9471\") " pod="openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.635295 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/335859ad-cb89-458a-9488-8afc994f9471-config\") pod \"route-controller-manager-7888bd7874-kzrp6\" (UID: \"335859ad-cb89-458a-9488-8afc994f9471\") " pod="openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.635318 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/335859ad-cb89-458a-9488-8afc994f9471-serving-cert\") pod \"route-controller-manager-7888bd7874-kzrp6\" (UID: \"335859ad-cb89-458a-9488-8afc994f9471\") " pod="openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.736347 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j64lf\" (UniqueName: \"kubernetes.io/projected/335859ad-cb89-458a-9488-8afc994f9471-kube-api-access-j64lf\") pod 
\"route-controller-manager-7888bd7874-kzrp6\" (UID: \"335859ad-cb89-458a-9488-8afc994f9471\") " pod="openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.736442 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/335859ad-cb89-458a-9488-8afc994f9471-client-ca\") pod \"route-controller-manager-7888bd7874-kzrp6\" (UID: \"335859ad-cb89-458a-9488-8afc994f9471\") " pod="openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.736472 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/335859ad-cb89-458a-9488-8afc994f9471-config\") pod \"route-controller-manager-7888bd7874-kzrp6\" (UID: \"335859ad-cb89-458a-9488-8afc994f9471\") " pod="openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.736492 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/335859ad-cb89-458a-9488-8afc994f9471-serving-cert\") pod \"route-controller-manager-7888bd7874-kzrp6\" (UID: \"335859ad-cb89-458a-9488-8afc994f9471\") " pod="openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.737538 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/335859ad-cb89-458a-9488-8afc994f9471-client-ca\") pod \"route-controller-manager-7888bd7874-kzrp6\" (UID: \"335859ad-cb89-458a-9488-8afc994f9471\") " pod="openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.737900 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/335859ad-cb89-458a-9488-8afc994f9471-config\") pod \"route-controller-manager-7888bd7874-kzrp6\" (UID: \"335859ad-cb89-458a-9488-8afc994f9471\") " pod="openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.752148 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/335859ad-cb89-458a-9488-8afc994f9471-serving-cert\") pod \"route-controller-manager-7888bd7874-kzrp6\" (UID: \"335859ad-cb89-458a-9488-8afc994f9471\") " pod="openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.752932 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j64lf\" (UniqueName: \"kubernetes.io/projected/335859ad-cb89-458a-9488-8afc994f9471-kube-api-access-j64lf\") pod \"route-controller-manager-7888bd7874-kzrp6\" (UID: \"335859ad-cb89-458a-9488-8afc994f9471\") " pod="openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6" Feb 01 14:24:17 crc kubenswrapper[4820]: I0201 14:24:17.913021 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6" Feb 01 14:24:18 crc kubenswrapper[4820]: I0201 14:24:18.230805 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 14:24:18 crc kubenswrapper[4820]: I0201 14:24:18.300100 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6"] Feb 01 14:24:18 crc kubenswrapper[4820]: W0201 14:24:18.339906 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod335859ad_cb89_458a_9488_8afc994f9471.slice/crio-6bdf8194527d0d7b30cf23963854874a4c4c66b7a63e13007d25cfd5339a78e0 WatchSource:0}: Error finding container 6bdf8194527d0d7b30cf23963854874a4c4c66b7a63e13007d25cfd5339a78e0: Status 404 returned error can't find the container with id 6bdf8194527d0d7b30cf23963854874a4c4c66b7a63e13007d25cfd5339a78e0 Feb 01 14:24:18 crc kubenswrapper[4820]: I0201 14:24:18.351345 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0801e2ec-1a96-4dce-b86f-7f753b4150f4-kube-api-access\") pod \"0801e2ec-1a96-4dce-b86f-7f753b4150f4\" (UID: \"0801e2ec-1a96-4dce-b86f-7f753b4150f4\") " Feb 01 14:24:18 crc kubenswrapper[4820]: I0201 14:24:18.351454 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0801e2ec-1a96-4dce-b86f-7f753b4150f4-kubelet-dir\") pod \"0801e2ec-1a96-4dce-b86f-7f753b4150f4\" (UID: \"0801e2ec-1a96-4dce-b86f-7f753b4150f4\") " Feb 01 14:24:18 crc kubenswrapper[4820]: I0201 14:24:18.351657 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0801e2ec-1a96-4dce-b86f-7f753b4150f4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0801e2ec-1a96-4dce-b86f-7f753b4150f4" (UID: "0801e2ec-1a96-4dce-b86f-7f753b4150f4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:24:18 crc kubenswrapper[4820]: I0201 14:24:18.356344 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0801e2ec-1a96-4dce-b86f-7f753b4150f4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0801e2ec-1a96-4dce-b86f-7f753b4150f4" (UID: "0801e2ec-1a96-4dce-b86f-7f753b4150f4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:24:18 crc kubenswrapper[4820]: I0201 14:24:18.452469 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0801e2ec-1a96-4dce-b86f-7f753b4150f4-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:18 crc kubenswrapper[4820]: I0201 14:24:18.452936 4820 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0801e2ec-1a96-4dce-b86f-7f753b4150f4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:18 crc kubenswrapper[4820]: I0201 14:24:18.999190 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0801e2ec-1a96-4dce-b86f-7f753b4150f4","Type":"ContainerDied","Data":"625d84740592ca0e66d4489b25f748a76679c6e0661f8005249d5abb4bc441d5"} Feb 01 14:24:18 crc kubenswrapper[4820]: I0201 14:24:18.999234 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="625d84740592ca0e66d4489b25f748a76679c6e0661f8005249d5abb4bc441d5" Feb 01 14:24:18 crc kubenswrapper[4820]: I0201 14:24:18.999246 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 14:24:19 crc kubenswrapper[4820]: I0201 14:24:19.000709 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6" event={"ID":"335859ad-cb89-458a-9488-8afc994f9471","Type":"ContainerStarted","Data":"e49215ebda3e46e1eb8c1d2ca294f3cfb1aba44ae11458560357a11fcda5634c"} Feb 01 14:24:19 crc kubenswrapper[4820]: I0201 14:24:19.000759 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6" event={"ID":"335859ad-cb89-458a-9488-8afc994f9471","Type":"ContainerStarted","Data":"6bdf8194527d0d7b30cf23963854874a4c4c66b7a63e13007d25cfd5339a78e0"} Feb 01 14:24:19 crc kubenswrapper[4820]: I0201 14:24:19.001008 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6" Feb 01 14:24:19 crc kubenswrapper[4820]: I0201 14:24:19.040388 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6" Feb 01 14:24:19 crc kubenswrapper[4820]: I0201 14:24:19.058752 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6" podStartSLOduration=12.058737784 podStartE2EDuration="12.058737784s" podCreationTimestamp="2026-02-01 14:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:24:19.026401245 +0000 UTC m=+200.546767529" watchObservedRunningTime="2026-02-01 14:24:19.058737784 +0000 UTC m=+200.579104068" Feb 01 14:24:19 crc kubenswrapper[4820]: I0201 14:24:19.242276 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:24:19 crc kubenswrapper[4820]: I0201 14:24:19.242340 4820 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:24:19 crc kubenswrapper[4820]: I0201 14:24:19.665195 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-m7jf2" Feb 01 14:24:26 crc kubenswrapper[4820]: I0201 14:24:26.042764 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5dnc" event={"ID":"5cd3df7b-e150-490b-9785-ccfab6b264b5","Type":"ContainerStarted","Data":"21d572e11972efd7350d14c3194ae036d09b1f538e37ea2547ba5d5b6044602b"} Feb 01 14:24:26 crc kubenswrapper[4820]: I0201 14:24:26.044595 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6kqz" event={"ID":"3dee68b0-a47b-49fd-a889-7bf3bc58c380","Type":"ContainerStarted","Data":"0a6b1c512e29dd838af3193ce4944c03b6998a5afd4de09cb5173753546de5d1"} Feb 01 14:24:26 crc kubenswrapper[4820]: I0201 14:24:26.046992 4820 generic.go:334] "Generic (PLEG): container finished" podID="90ef31dc-afaa-4644-a842-43b67375e125" containerID="753c5f81b0b4e335694b0cf88ae6332b03c1c5fd4aaf8de27fa8956d43fe20ef" exitCode=0 Feb 01 14:24:26 crc kubenswrapper[4820]: I0201 14:24:26.047048 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7gvg" event={"ID":"90ef31dc-afaa-4644-a842-43b67375e125","Type":"ContainerDied","Data":"753c5f81b0b4e335694b0cf88ae6332b03c1c5fd4aaf8de27fa8956d43fe20ef"} Feb 01 14:24:27 crc kubenswrapper[4820]: I0201 14:24:27.053809 4820 generic.go:334] "Generic (PLEG): container finished" podID="5cd3df7b-e150-490b-9785-ccfab6b264b5" containerID="21d572e11972efd7350d14c3194ae036d09b1f538e37ea2547ba5d5b6044602b" exitCode=0 Feb 01 14:24:27 crc kubenswrapper[4820]: I0201 14:24:27.053910 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5dnc" event={"ID":"5cd3df7b-e150-490b-9785-ccfab6b264b5","Type":"ContainerDied","Data":"21d572e11972efd7350d14c3194ae036d09b1f538e37ea2547ba5d5b6044602b"} Feb 01 14:24:27 crc kubenswrapper[4820]: I0201 14:24:27.058586 4820 generic.go:334] "Generic (PLEG): container finished" podID="3dee68b0-a47b-49fd-a889-7bf3bc58c380" containerID="0a6b1c512e29dd838af3193ce4944c03b6998a5afd4de09cb5173753546de5d1" exitCode=0 Feb 01 14:24:27 crc kubenswrapper[4820]: I0201 14:24:27.058687 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6kqz" event={"ID":"3dee68b0-a47b-49fd-a889-7bf3bc58c380","Type":"ContainerDied","Data":"0a6b1c512e29dd838af3193ce4944c03b6998a5afd4de09cb5173753546de5d1"} Feb 01 14:24:27 crc kubenswrapper[4820]: I0201 14:24:27.060661 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7gvg" event={"ID":"90ef31dc-afaa-4644-a842-43b67375e125","Type":"ContainerStarted","Data":"92e7fc8b3fb4a2c94a576cc7f2acb6c241a41b9f4d330d338505ad123852f13c"} Feb 01 14:24:27 crc kubenswrapper[4820]: I0201 14:24:27.096358 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d7gvg" podStartSLOduration=3.190713482 podStartE2EDuration="1m0.096337184s" podCreationTimestamp="2026-02-01 14:23:27 +0000 UTC" firstStartedPulling="2026-02-01 14:23:29.536593439 +0000 UTC 
m=+151.056959723" lastFinishedPulling="2026-02-01 14:24:26.442217131 +0000 UTC m=+207.962583425" observedRunningTime="2026-02-01 14:24:27.093814888 +0000 UTC m=+208.614181172" watchObservedRunningTime="2026-02-01 14:24:27.096337184 +0000 UTC m=+208.616703468" Feb 01 14:24:27 crc kubenswrapper[4820]: I0201 14:24:27.693031 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d7gvg" Feb 01 14:24:27 crc kubenswrapper[4820]: I0201 14:24:27.693079 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d7gvg" Feb 01 14:24:28 crc kubenswrapper[4820]: I0201 14:24:28.896942 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-d7gvg" podUID="90ef31dc-afaa-4644-a842-43b67375e125" containerName="registry-server" probeResult="failure" output=< Feb 01 14:24:28 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 01 14:24:28 crc kubenswrapper[4820]: > Feb 01 14:24:29 crc kubenswrapper[4820]: I0201 14:24:29.071520 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5dnc" event={"ID":"5cd3df7b-e150-490b-9785-ccfab6b264b5","Type":"ContainerStarted","Data":"65970a680622174be9829cba1fb57e0dd75afd616f0bd1b0f162d57af0062efd"} Feb 01 14:24:29 crc kubenswrapper[4820]: I0201 14:24:29.073711 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6kqz" event={"ID":"3dee68b0-a47b-49fd-a889-7bf3bc58c380","Type":"ContainerStarted","Data":"7f2a1ae2c4f9950cd8f0aeafd780dbbb7a4796dc9aacc7ca394e0a0e084da54e"} Feb 01 14:24:29 crc kubenswrapper[4820]: I0201 14:24:29.089402 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z5dnc" podStartSLOduration=5.081807944 podStartE2EDuration="1m3.089387992s" podCreationTimestamp="2026-02-01 14:23:26 +0000 UTC" firstStartedPulling="2026-02-01 14:23:29.541196141 +0000 UTC m=+151.061562425" lastFinishedPulling="2026-02-01 14:24:27.548776189 +0000 UTC m=+209.069142473" observedRunningTime="2026-02-01 14:24:29.087232767 +0000 UTC m=+210.607599051" watchObservedRunningTime="2026-02-01 14:24:29.089387992 +0000 UTC m=+210.609754266" Feb 01 14:24:29 crc kubenswrapper[4820]: I0201 14:24:29.107574 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p6kqz" podStartSLOduration=4.113433455 podStartE2EDuration="1m3.107561136s" podCreationTimestamp="2026-02-01 14:23:26 +0000 UTC" firstStartedPulling="2026-02-01 14:23:28.519308976 +0000 UTC m=+150.039675260" lastFinishedPulling="2026-02-01 14:24:27.513436667 +0000 UTC m=+209.033802941" observedRunningTime="2026-02-01 14:24:29.103077079 +0000 UTC m=+210.623443363" watchObservedRunningTime="2026-02-01 14:24:29.107561136 +0000 UTC m=+210.627927420" Feb 01 14:24:36 crc kubenswrapper[4820]: I0201 14:24:36.964059 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p6kqz" Feb 01 14:24:36 crc kubenswrapper[4820]: I0201 14:24:36.965087 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p6kqz" Feb 01 14:24:37 crc kubenswrapper[4820]: I0201 14:24:37.092736 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p6kqz" Feb 01 
Feb 01 14:24:37 crc kubenswrapper[4820]: I0201 14:24:37.182774 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p6kqz" Feb 01 14:24:37 crc kubenswrapper[4820]: I0201 14:24:37.215536 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z5dnc" Feb 01 14:24:37 crc kubenswrapper[4820]: I0201 14:24:37.215584 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z5dnc" Feb 01 14:24:37 crc kubenswrapper[4820]: I0201 14:24:37.260202 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z5dnc" Feb 01 14:24:37 crc kubenswrapper[4820]: I0201 14:24:37.740407 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d7gvg" Feb 01 14:24:37 crc kubenswrapper[4820]: I0201 14:24:37.781110 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d7gvg" Feb 01 14:24:38 crc kubenswrapper[4820]: I0201 14:24:38.172121 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z5dnc" Feb 01 14:24:38 crc kubenswrapper[4820]: I0201 14:24:38.431841 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d7gvg"] Feb 01 14:24:39 crc kubenswrapper[4820]: I0201 14:24:39.131435 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d7gvg" podUID="90ef31dc-afaa-4644-a842-43b67375e125" containerName="registry-server" containerID="cri-o://92e7fc8b3fb4a2c94a576cc7f2acb6c241a41b9f4d330d338505ad123852f13c" gracePeriod=2 Feb 01 14:24:40 crc kubenswrapper[4820]: I0201 14:24:40.138059 4820 generic.go:334] "Generic (PLEG): container finished" podID="90ef31dc-afaa-4644-a842-43b67375e125" containerID="92e7fc8b3fb4a2c94a576cc7f2acb6c241a41b9f4d330d338505ad123852f13c" exitCode=0 Feb 01 14:24:40 crc kubenswrapper[4820]: I0201 14:24:40.138123 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7gvg" event={"ID":"90ef31dc-afaa-4644-a842-43b67375e125","Type":"ContainerDied","Data":"92e7fc8b3fb4a2c94a576cc7f2acb6c241a41b9f4d330d338505ad123852f13c"} Feb 01 14:24:40 crc kubenswrapper[4820]: I0201 14:24:40.985407 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d7gvg" Feb 01 14:24:41 crc kubenswrapper[4820]: I0201 14:24:41.129830 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90ef31dc-afaa-4644-a842-43b67375e125-utilities\") pod \"90ef31dc-afaa-4644-a842-43b67375e125\" (UID: \"90ef31dc-afaa-4644-a842-43b67375e125\") " Feb 01 14:24:41 crc kubenswrapper[4820]: I0201 14:24:41.129930 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqdj7\" (UniqueName: \"kubernetes.io/projected/90ef31dc-afaa-4644-a842-43b67375e125-kube-api-access-gqdj7\") pod \"90ef31dc-afaa-4644-a842-43b67375e125\" (UID: \"90ef31dc-afaa-4644-a842-43b67375e125\") " Feb 01 14:24:41 crc kubenswrapper[4820]: I0201 14:24:41.129984 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90ef31dc-afaa-4644-a842-43b67375e125-catalog-content\") pod \"90ef31dc-afaa-4644-a842-43b67375e125\" (UID: \"90ef31dc-afaa-4644-a842-43b67375e125\") " Feb 01 14:24:41 crc kubenswrapper[4820]: I0201 14:24:41.130893 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90ef31dc-afaa-4644-a842-43b67375e125-utilities" (OuterVolumeSpecName: "utilities") pod "90ef31dc-afaa-4644-a842-43b67375e125" (UID: "90ef31dc-afaa-4644-a842-43b67375e125"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:24:41 crc kubenswrapper[4820]: I0201 14:24:41.135656 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90ef31dc-afaa-4644-a842-43b67375e125-kube-api-access-gqdj7" (OuterVolumeSpecName: "kube-api-access-gqdj7") pod "90ef31dc-afaa-4644-a842-43b67375e125" (UID: "90ef31dc-afaa-4644-a842-43b67375e125"). InnerVolumeSpecName "kube-api-access-gqdj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:24:41 crc kubenswrapper[4820]: I0201 14:24:41.145203 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7gvg" event={"ID":"90ef31dc-afaa-4644-a842-43b67375e125","Type":"ContainerDied","Data":"00a961cadc177ea2ea59efa9c827b6cdae03617bfd0407912dd64be48023a78c"} Feb 01 14:24:41 crc kubenswrapper[4820]: I0201 14:24:41.145255 4820 scope.go:117] "RemoveContainer" containerID="92e7fc8b3fb4a2c94a576cc7f2acb6c241a41b9f4d330d338505ad123852f13c" Feb 01 14:24:41 crc kubenswrapper[4820]: I0201 14:24:41.145289 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d7gvg" Feb 01 14:24:41 crc kubenswrapper[4820]: I0201 14:24:41.184239 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90ef31dc-afaa-4644-a842-43b67375e125-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90ef31dc-afaa-4644-a842-43b67375e125" (UID: "90ef31dc-afaa-4644-a842-43b67375e125"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:24:41 crc kubenswrapper[4820]: I0201 14:24:41.231260 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90ef31dc-afaa-4644-a842-43b67375e125-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:41 crc kubenswrapper[4820]: I0201 14:24:41.231289 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90ef31dc-afaa-4644-a842-43b67375e125-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:41 crc kubenswrapper[4820]: I0201 14:24:41.231300 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqdj7\" (UniqueName: \"kubernetes.io/projected/90ef31dc-afaa-4644-a842-43b67375e125-kube-api-access-gqdj7\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:41 crc kubenswrapper[4820]: I0201 14:24:41.466232 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d7gvg"] Feb 01 14:24:41 crc kubenswrapper[4820]: I0201 14:24:41.468993 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d7gvg"] Feb 01 14:24:41 crc kubenswrapper[4820]: I0201 14:24:41.792474 4820 scope.go:117] "RemoveContainer" containerID="753c5f81b0b4e335694b0cf88ae6332b03c1c5fd4aaf8de27fa8956d43fe20ef" Feb 01 14:24:41 crc kubenswrapper[4820]: I0201 14:24:41.833031 4820 scope.go:117] "RemoveContainer" containerID="4ef44a45b3bf6a913f4391983f489b4b0067ec80492123b3b77aec6545f0e313" Feb 01 14:24:42 crc kubenswrapper[4820]: I0201 14:24:42.153313 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jd9wc" event={"ID":"df4ea2d8-4be0-4e30-b48a-484a93d725b0","Type":"ContainerStarted","Data":"bd6f7b4a3bb7848abeca988d2f1dd4f6f8dcca89b422d4f81be4252702c3d5f8"} Feb 01 14:24:42 crc kubenswrapper[4820]: I0201 14:24:42.162611 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nswhf" event={"ID":"26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b","Type":"ContainerStarted","Data":"92bc6c47291abb9be82ac203d2a25ca88f7cd8207cbdd4ee3780e3efcba072e0"} Feb 01 14:24:42 crc kubenswrapper[4820]: I0201 14:24:42.168471 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8mbbr" event={"ID":"382873c4-83aa-4693-9eb8-7b1f41b0f22b","Type":"ContainerStarted","Data":"1a4cc2c26b15cf620b1e7048ef115b43011274df4bf0be8b495516e1d0b04244"} Feb 01 14:24:42 crc kubenswrapper[4820]: I0201 14:24:42.171368 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkfmb" event={"ID":"f73b6fb9-8420-42fe-9b3d-42d17a204743","Type":"ContainerStarted","Data":"56ebc4d6736e61a21a907522e3ea546446f68b8d5e8df8e925d9ec730b9c8055"} Feb 01 14:24:43 crc kubenswrapper[4820]: I0201 14:24:43.179116 4820 generic.go:334] "Generic (PLEG): container finished" podID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" containerID="1a4cc2c26b15cf620b1e7048ef115b43011274df4bf0be8b495516e1d0b04244" exitCode=0 Feb 01 14:24:43 crc kubenswrapper[4820]: I0201 14:24:43.179162 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8mbbr" event={"ID":"382873c4-83aa-4693-9eb8-7b1f41b0f22b","Type":"ContainerDied","Data":"1a4cc2c26b15cf620b1e7048ef115b43011274df4bf0be8b495516e1d0b04244"} Feb 01 14:24:43 crc kubenswrapper[4820]: I0201 14:24:43.181552 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="df4ea2d8-4be0-4e30-b48a-484a93d725b0" containerID="bd6f7b4a3bb7848abeca988d2f1dd4f6f8dcca89b422d4f81be4252702c3d5f8" exitCode=0 Feb 01 14:24:43 crc kubenswrapper[4820]: I0201 14:24:43.181595 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jd9wc" event={"ID":"df4ea2d8-4be0-4e30-b48a-484a93d725b0","Type":"ContainerDied","Data":"bd6f7b4a3bb7848abeca988d2f1dd4f6f8dcca89b422d4f81be4252702c3d5f8"} Feb 01 14:24:43 crc kubenswrapper[4820]: I0201 14:24:43.183408 4820 generic.go:334] "Generic (PLEG): container finished" podID="4fdec671-8c4f-4814-80f3-eb6580b4a706" containerID="073be3f76e887996f93fc19a1e484ffa846dcc72bbb66bdcdbf320e743725603" exitCode=0 Feb 01 14:24:43 crc kubenswrapper[4820]: I0201 14:24:43.183466 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmllb" event={"ID":"4fdec671-8c4f-4814-80f3-eb6580b4a706","Type":"ContainerDied","Data":"073be3f76e887996f93fc19a1e484ffa846dcc72bbb66bdcdbf320e743725603"} Feb 01 14:24:43 crc kubenswrapper[4820]: I0201 14:24:43.194712 4820 generic.go:334] "Generic (PLEG): container finished" podID="26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b" containerID="92bc6c47291abb9be82ac203d2a25ca88f7cd8207cbdd4ee3780e3efcba072e0" exitCode=0 Feb 01 14:24:43 crc kubenswrapper[4820]: I0201 14:24:43.194767 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nswhf" event={"ID":"26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b","Type":"ContainerDied","Data":"92bc6c47291abb9be82ac203d2a25ca88f7cd8207cbdd4ee3780e3efcba072e0"} Feb 01 14:24:43 crc kubenswrapper[4820]: I0201 14:24:43.200376 4820 generic.go:334] "Generic (PLEG): container finished" podID="f73b6fb9-8420-42fe-9b3d-42d17a204743" containerID="56ebc4d6736e61a21a907522e3ea546446f68b8d5e8df8e925d9ec730b9c8055" exitCode=0 Feb 01 14:24:43 crc kubenswrapper[4820]: I0201 14:24:43.207374 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90ef31dc-afaa-4644-a842-43b67375e125" path="/var/lib/kubelet/pods/90ef31dc-afaa-4644-a842-43b67375e125/volumes" Feb 01 14:24:43 crc kubenswrapper[4820]: I0201 14:24:43.208169 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkfmb" event={"ID":"f73b6fb9-8420-42fe-9b3d-42d17a204743","Type":"ContainerDied","Data":"56ebc4d6736e61a21a907522e3ea546446f68b8d5e8df8e925d9ec730b9c8055"} Feb 01 14:24:44 crc kubenswrapper[4820]: I0201 14:24:44.207456 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jd9wc" event={"ID":"df4ea2d8-4be0-4e30-b48a-484a93d725b0","Type":"ContainerStarted","Data":"71b634e1b2b5ce2c0ef7e0aec1ab54abc6a4acb171985584224708847580c2fb"} Feb 01 14:24:44 crc kubenswrapper[4820]: I0201 14:24:44.209666 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmllb" event={"ID":"4fdec671-8c4f-4814-80f3-eb6580b4a706","Type":"ContainerStarted","Data":"cde3a8fe51a1070993b99059044a35e0fc86005e83995e6cdf07106bbd65d879"} Feb 01 14:24:44 crc kubenswrapper[4820]: I0201 14:24:44.211585 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nswhf" event={"ID":"26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b","Type":"ContainerStarted","Data":"546a897db45d29d1635c844043bf330c00d1f04211c6b7eafaae435d4beeb2bb"} Feb 01 14:24:44 crc kubenswrapper[4820]: I0201 14:24:44.213712 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-8mbbr" event={"ID":"382873c4-83aa-4693-9eb8-7b1f41b0f22b","Type":"ContainerStarted","Data":"b9a36891771077760a0a0aa8a2f281a5b979dd76bbb323b24333a706074f7bce"} Feb 01 14:24:44 crc kubenswrapper[4820]: I0201 14:24:44.215786 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkfmb" event={"ID":"f73b6fb9-8420-42fe-9b3d-42d17a204743","Type":"ContainerStarted","Data":"4fefd273802b187a6cf47320b83f79969469dd466c1620fd72f160c512720fbe"} Feb 01 14:24:44 crc kubenswrapper[4820]: I0201 14:24:44.249626 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jd9wc" podStartSLOduration=3.16829236 podStartE2EDuration="1m15.249584658s" podCreationTimestamp="2026-02-01 14:23:29 +0000 UTC" firstStartedPulling="2026-02-01 14:23:31.603736442 +0000 UTC m=+153.124102716" lastFinishedPulling="2026-02-01 14:24:43.68502873 +0000 UTC m=+225.205395014" observedRunningTime="2026-02-01 14:24:44.231388224 +0000 UTC m=+225.751754518" watchObservedRunningTime="2026-02-01 14:24:44.249584658 +0000 UTC m=+225.769950942" Feb 01 14:24:44 crc kubenswrapper[4820]: I0201 14:24:44.253443 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vkfmb" podStartSLOduration=3.245128347 podStartE2EDuration="1m16.253426338s" podCreationTimestamp="2026-02-01 14:23:28 +0000 UTC" firstStartedPulling="2026-02-01 14:23:30.580078649 +0000 UTC m=+152.100444933" lastFinishedPulling="2026-02-01 14:24:43.58837664 +0000 UTC m=+225.108742924" observedRunningTime="2026-02-01 14:24:44.249732362 +0000 UTC m=+225.770098656" watchObservedRunningTime="2026-02-01 14:24:44.253426338 +0000 UTC m=+225.773792622" Feb 01 14:24:44 crc kubenswrapper[4820]: I0201 14:24:44.268327 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8mbbr" podStartSLOduration=2.208761767 podStartE2EDuration="1m14.268308016s" podCreationTimestamp="2026-02-01 14:23:30 +0000 UTC" firstStartedPulling="2026-02-01 14:23:31.607715458 +0000 UTC m=+153.128081742" lastFinishedPulling="2026-02-01 14:24:43.667261707 +0000 UTC m=+225.187627991" observedRunningTime="2026-02-01 14:24:44.267071484 +0000 UTC m=+225.787437768" watchObservedRunningTime="2026-02-01 14:24:44.268308016 +0000 UTC m=+225.788674300" Feb 01 14:24:44 crc kubenswrapper[4820]: I0201 14:24:44.287052 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nswhf" podStartSLOduration=2.050787391 podStartE2EDuration="1m15.287031624s" podCreationTimestamp="2026-02-01 14:23:29 +0000 UTC" firstStartedPulling="2026-02-01 14:23:30.580020518 +0000 UTC m=+152.100386802" lastFinishedPulling="2026-02-01 14:24:43.816264751 +0000 UTC m=+225.336631035" observedRunningTime="2026-02-01 14:24:44.286999143 +0000 UTC m=+225.807365427" watchObservedRunningTime="2026-02-01 14:24:44.287031624 +0000 UTC m=+225.807397908" Feb 01 14:24:47 crc kubenswrapper[4820]: I0201 14:24:47.377152 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dmllb" Feb 01 14:24:47 crc kubenswrapper[4820]: I0201 14:24:47.378268 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dmllb" Feb 01 14:24:47 crc kubenswrapper[4820]: I0201 14:24:47.410574 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-dmllb" Feb 01 14:24:47 crc kubenswrapper[4820]: I0201 14:24:47.428402 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dmllb" podStartSLOduration=6.393183087 podStartE2EDuration="1m20.428384539s" podCreationTimestamp="2026-02-01 14:23:27 +0000 UTC" firstStartedPulling="2026-02-01 14:23:29.549433811 +0000 UTC m=+151.069800095" lastFinishedPulling="2026-02-01 14:24:43.584635263 +0000 UTC m=+225.105001547" observedRunningTime="2026-02-01 14:24:44.314052449 +0000 UTC m=+225.834418733" watchObservedRunningTime="2026-02-01 14:24:47.428384539 +0000 UTC m=+228.948750843" Feb 01 14:24:47 crc kubenswrapper[4820]: I0201 14:24:47.621804 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c85bd9594-zstst"] Feb 01 14:24:47 crc kubenswrapper[4820]: I0201 14:24:47.622366 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" podUID="f705a784-9e52-4237-9b15-c7f61ff9ed29" containerName="controller-manager" containerID="cri-o://ddb6a971db101b68662a47c8878ce6a323a89abbafc6f25b6267db07f6bd4e12" gracePeriod=30 Feb 01 14:24:47 crc kubenswrapper[4820]: I0201 14:24:47.713479 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6"] Feb 01 14:24:47 crc kubenswrapper[4820]: I0201 14:24:47.713772 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6" podUID="335859ad-cb89-458a-9488-8afc994f9471" containerName="route-controller-manager" containerID="cri-o://e49215ebda3e46e1eb8c1d2ca294f3cfb1aba44ae11458560357a11fcda5634c" gracePeriod=30 Feb 01 14:24:47 crc kubenswrapper[4820]: I0201 14:24:47.914181 4820 patch_prober.go:28] interesting pod/route-controller-manager-7888bd7874-kzrp6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Feb 01 14:24:47 crc kubenswrapper[4820]: I0201 14:24:47.914554 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6" podUID="335859ad-cb89-458a-9488-8afc994f9471" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Feb 01 14:24:48 crc kubenswrapper[4820]: I0201 14:24:48.278025 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dmllb" Feb 01 14:24:48 crc kubenswrapper[4820]: I0201 14:24:48.582986 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f4mwh"] Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.167776 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vkfmb" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.168722 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vkfmb" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.218541 4820 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vkfmb" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.242853 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.242915 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.242957 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.243490 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4"} pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.243537 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" containerID="cri-o://b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4" gracePeriod=600 Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.250599 4820 generic.go:334] "Generic (PLEG): container finished" podID="335859ad-cb89-458a-9488-8afc994f9471" containerID="e49215ebda3e46e1eb8c1d2ca294f3cfb1aba44ae11458560357a11fcda5634c" exitCode=0 Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.250705 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6" event={"ID":"335859ad-cb89-458a-9488-8afc994f9471","Type":"ContainerDied","Data":"e49215ebda3e46e1eb8c1d2ca294f3cfb1aba44ae11458560357a11fcda5634c"} Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.260053 4820 generic.go:334] "Generic (PLEG): container finished" podID="f705a784-9e52-4237-9b15-c7f61ff9ed29" containerID="ddb6a971db101b68662a47c8878ce6a323a89abbafc6f25b6267db07f6bd4e12" exitCode=0 Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.260167 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" event={"ID":"f705a784-9e52-4237-9b15-c7f61ff9ed29","Type":"ContainerDied","Data":"ddb6a971db101b68662a47c8878ce6a323a89abbafc6f25b6267db07f6bd4e12"} Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.315421 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vkfmb" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.598024 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.599624 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nswhf" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.599684 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nswhf" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.647791 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-85895dc97d-n2g5s"] Feb 01 14:24:49 crc kubenswrapper[4820]: E0201 14:24:49.648106 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90ef31dc-afaa-4644-a842-43b67375e125" containerName="extract-content" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.648136 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ef31dc-afaa-4644-a842-43b67375e125" containerName="extract-content" Feb 01 14:24:49 crc kubenswrapper[4820]: E0201 14:24:49.648150 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90ef31dc-afaa-4644-a842-43b67375e125" containerName="extract-utilities" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.648167 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ef31dc-afaa-4644-a842-43b67375e125" containerName="extract-utilities" Feb 01 14:24:49 crc kubenswrapper[4820]: E0201 14:24:49.648179 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f705a784-9e52-4237-9b15-c7f61ff9ed29" containerName="controller-manager" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.648186 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f705a784-9e52-4237-9b15-c7f61ff9ed29" containerName="controller-manager" Feb 01 14:24:49 crc kubenswrapper[4820]: E0201 14:24:49.648198 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0801e2ec-1a96-4dce-b86f-7f753b4150f4" containerName="pruner" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.648204 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0801e2ec-1a96-4dce-b86f-7f753b4150f4" containerName="pruner" Feb 01 14:24:49 crc kubenswrapper[4820]: E0201 14:24:49.648213 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90ef31dc-afaa-4644-a842-43b67375e125" containerName="registry-server" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.648219 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ef31dc-afaa-4644-a842-43b67375e125" containerName="registry-server" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.648444 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0801e2ec-1a96-4dce-b86f-7f753b4150f4" containerName="pruner" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.648474 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f705a784-9e52-4237-9b15-c7f61ff9ed29" containerName="controller-manager" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.648504 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="90ef31dc-afaa-4644-a842-43b67375e125" containerName="registry-server" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.649075 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nswhf" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.649186 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.653418 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.674215 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85895dc97d-n2g5s"] Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.735740 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h7wf\" (UniqueName: \"kubernetes.io/projected/f705a784-9e52-4237-9b15-c7f61ff9ed29-kube-api-access-6h7wf\") pod \"f705a784-9e52-4237-9b15-c7f61ff9ed29\" (UID: \"f705a784-9e52-4237-9b15-c7f61ff9ed29\") " Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.735780 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f705a784-9e52-4237-9b15-c7f61ff9ed29-client-ca\") pod \"f705a784-9e52-4237-9b15-c7f61ff9ed29\" (UID: \"f705a784-9e52-4237-9b15-c7f61ff9ed29\") " Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.735818 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f705a784-9e52-4237-9b15-c7f61ff9ed29-config\") pod \"f705a784-9e52-4237-9b15-c7f61ff9ed29\" (UID: \"f705a784-9e52-4237-9b15-c7f61ff9ed29\") " Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.735888 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f705a784-9e52-4237-9b15-c7f61ff9ed29-proxy-ca-bundles\") pod \"f705a784-9e52-4237-9b15-c7f61ff9ed29\" (UID: \"f705a784-9e52-4237-9b15-c7f61ff9ed29\") " Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.735920 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f705a784-9e52-4237-9b15-c7f61ff9ed29-serving-cert\") pod \"f705a784-9e52-4237-9b15-c7f61ff9ed29\" (UID: \"f705a784-9e52-4237-9b15-c7f61ff9ed29\") " Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.737294 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f705a784-9e52-4237-9b15-c7f61ff9ed29-client-ca" (OuterVolumeSpecName: "client-ca") pod "f705a784-9e52-4237-9b15-c7f61ff9ed29" (UID: "f705a784-9e52-4237-9b15-c7f61ff9ed29"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.737722 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f705a784-9e52-4237-9b15-c7f61ff9ed29-config" (OuterVolumeSpecName: "config") pod "f705a784-9e52-4237-9b15-c7f61ff9ed29" (UID: "f705a784-9e52-4237-9b15-c7f61ff9ed29"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.738168 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f705a784-9e52-4237-9b15-c7f61ff9ed29-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f705a784-9e52-4237-9b15-c7f61ff9ed29" (UID: "f705a784-9e52-4237-9b15-c7f61ff9ed29"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.742382 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f705a784-9e52-4237-9b15-c7f61ff9ed29-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f705a784-9e52-4237-9b15-c7f61ff9ed29" (UID: "f705a784-9e52-4237-9b15-c7f61ff9ed29"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.742788 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f705a784-9e52-4237-9b15-c7f61ff9ed29-kube-api-access-6h7wf" (OuterVolumeSpecName: "kube-api-access-6h7wf") pod "f705a784-9e52-4237-9b15-c7f61ff9ed29" (UID: "f705a784-9e52-4237-9b15-c7f61ff9ed29"). InnerVolumeSpecName "kube-api-access-6h7wf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.836769 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j64lf\" (UniqueName: \"kubernetes.io/projected/335859ad-cb89-458a-9488-8afc994f9471-kube-api-access-j64lf\") pod \"335859ad-cb89-458a-9488-8afc994f9471\" (UID: \"335859ad-cb89-458a-9488-8afc994f9471\") " Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.836855 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/335859ad-cb89-458a-9488-8afc994f9471-config\") pod \"335859ad-cb89-458a-9488-8afc994f9471\" (UID: \"335859ad-cb89-458a-9488-8afc994f9471\") " Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.836943 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/335859ad-cb89-458a-9488-8afc994f9471-client-ca\") pod \"335859ad-cb89-458a-9488-8afc994f9471\" (UID: \"335859ad-cb89-458a-9488-8afc994f9471\") " Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.837029 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/335859ad-cb89-458a-9488-8afc994f9471-serving-cert\") pod \"335859ad-cb89-458a-9488-8afc994f9471\" (UID: \"335859ad-cb89-458a-9488-8afc994f9471\") " Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.837183 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-client-ca\") pod \"controller-manager-85895dc97d-n2g5s\" (UID: \"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f\") " pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.837234 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-proxy-ca-bundles\") pod \"controller-manager-85895dc97d-n2g5s\" (UID: \"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f\") " pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.837262 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-config\") pod \"controller-manager-85895dc97d-n2g5s\" (UID: 
\"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f\") " pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.837287 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-serving-cert\") pod \"controller-manager-85895dc97d-n2g5s\" (UID: \"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f\") " pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.837338 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pntq\" (UniqueName: \"kubernetes.io/projected/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-kube-api-access-2pntq\") pod \"controller-manager-85895dc97d-n2g5s\" (UID: \"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f\") " pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.837373 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f705a784-9e52-4237-9b15-c7f61ff9ed29-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.837384 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f705a784-9e52-4237-9b15-c7f61ff9ed29-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.837394 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6h7wf\" (UniqueName: \"kubernetes.io/projected/f705a784-9e52-4237-9b15-c7f61ff9ed29-kube-api-access-6h7wf\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.837405 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f705a784-9e52-4237-9b15-c7f61ff9ed29-client-ca\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.837414 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f705a784-9e52-4237-9b15-c7f61ff9ed29-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.838016 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/335859ad-cb89-458a-9488-8afc994f9471-config" (OuterVolumeSpecName: "config") pod "335859ad-cb89-458a-9488-8afc994f9471" (UID: "335859ad-cb89-458a-9488-8afc994f9471"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.838129 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/335859ad-cb89-458a-9488-8afc994f9471-client-ca" (OuterVolumeSpecName: "client-ca") pod "335859ad-cb89-458a-9488-8afc994f9471" (UID: "335859ad-cb89-458a-9488-8afc994f9471"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.839901 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/335859ad-cb89-458a-9488-8afc994f9471-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "335859ad-cb89-458a-9488-8afc994f9471" (UID: "335859ad-cb89-458a-9488-8afc994f9471"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.840613 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/335859ad-cb89-458a-9488-8afc994f9471-kube-api-access-j64lf" (OuterVolumeSpecName: "kube-api-access-j64lf") pod "335859ad-cb89-458a-9488-8afc994f9471" (UID: "335859ad-cb89-458a-9488-8afc994f9471"). InnerVolumeSpecName "kube-api-access-j64lf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.938823 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-config\") pod \"controller-manager-85895dc97d-n2g5s\" (UID: \"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f\") " pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.938940 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-serving-cert\") pod \"controller-manager-85895dc97d-n2g5s\" (UID: \"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f\") " pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.939003 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pntq\" (UniqueName: \"kubernetes.io/projected/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-kube-api-access-2pntq\") pod \"controller-manager-85895dc97d-n2g5s\" (UID: \"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f\") " pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.939045 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-client-ca\") pod \"controller-manager-85895dc97d-n2g5s\" (UID: \"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f\") " pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.939092 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-proxy-ca-bundles\") pod \"controller-manager-85895dc97d-n2g5s\" (UID: \"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f\") " pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.939158 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/335859ad-cb89-458a-9488-8afc994f9471-client-ca\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.939173 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/335859ad-cb89-458a-9488-8afc994f9471-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.939185 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j64lf\" (UniqueName: \"kubernetes.io/projected/335859ad-cb89-458a-9488-8afc994f9471-kube-api-access-j64lf\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.939198 4820 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/335859ad-cb89-458a-9488-8afc994f9471-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.940265 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-client-ca\") pod \"controller-manager-85895dc97d-n2g5s\" (UID: \"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f\") " pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.940444 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-proxy-ca-bundles\") pod \"controller-manager-85895dc97d-n2g5s\" (UID: \"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f\") " pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.941748 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-config\") pod \"controller-manager-85895dc97d-n2g5s\" (UID: \"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f\") " pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.942323 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-serving-cert\") pod \"controller-manager-85895dc97d-n2g5s\" (UID: \"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f\") " pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.954048 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pntq\" (UniqueName: \"kubernetes.io/projected/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-kube-api-access-2pntq\") pod \"controller-manager-85895dc97d-n2g5s\" (UID: \"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f\") " pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" Feb 01 14:24:49 crc kubenswrapper[4820]: I0201 14:24:49.968324 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.137433 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85895dc97d-n2g5s"] Feb 01 14:24:50 crc kubenswrapper[4820]: W0201 14:24:50.143577 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b6424e9_d2fd_4597_9a72_4cb2adf24b9f.slice/crio-6b43749d617096bbb8fa4cc5fb3d97633c969ec7cc1d696c7b7f1bd812f5f1ce WatchSource:0}: Error finding container 6b43749d617096bbb8fa4cc5fb3d97633c969ec7cc1d696c7b7f1bd812f5f1ce: Status 404 returned error can't find the container with id 6b43749d617096bbb8fa4cc5fb3d97633c969ec7cc1d696c7b7f1bd812f5f1ce Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.221046 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jd9wc" Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.221097 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jd9wc" Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.258981 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jd9wc" Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.265556 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" event={"ID":"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f","Type":"ContainerStarted","Data":"6b43749d617096bbb8fa4cc5fb3d97633c969ec7cc1d696c7b7f1bd812f5f1ce"} Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.267072 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6" event={"ID":"335859ad-cb89-458a-9488-8afc994f9471","Type":"ContainerDied","Data":"6bdf8194527d0d7b30cf23963854874a4c4c66b7a63e13007d25cfd5339a78e0"} Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.267146 4820 scope.go:117] "RemoveContainer" containerID="e49215ebda3e46e1eb8c1d2ca294f3cfb1aba44ae11458560357a11fcda5634c" Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.267092 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6" Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.268759 4820 generic.go:334] "Generic (PLEG): container finished" podID="060a9e0b-803f-4ccc-bed6-92614d449527" containerID="b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4" exitCode=0 Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.268799 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerDied","Data":"b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4"} Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.271476 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" event={"ID":"f705a784-9e52-4237-9b15-c7f61ff9ed29","Type":"ContainerDied","Data":"3a285d04c70f43cdb7734f6e0f3878365ff37a3d8d223e9be705436e5080ba94"} Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.271657 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c85bd9594-zstst" Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.306708 4820 scope.go:117] "RemoveContainer" containerID="ddb6a971db101b68662a47c8878ce6a323a89abbafc6f25b6267db07f6bd4e12" Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.315139 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jd9wc" Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.324424 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nswhf" Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.397399 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8mbbr" Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.397670 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8mbbr" Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.403302 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c85bd9594-zstst"] Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.406528 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5c85bd9594-zstst"] Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.425957 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6"] Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.429699 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7888bd7874-kzrp6"] Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.442464 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dmllb"] Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.442749 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dmllb" podUID="4fdec671-8c4f-4814-80f3-eb6580b4a706" containerName="registry-server" containerID="cri-o://cde3a8fe51a1070993b99059044a35e0fc86005e83995e6cdf07106bbd65d879" gracePeriod=2 Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.518400 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8mbbr" Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.870956 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dmllb" Feb 01 14:24:50 crc kubenswrapper[4820]: I0201 14:24:50.998013 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjjlw\" (UniqueName: \"kubernetes.io/projected/4fdec671-8c4f-4814-80f3-eb6580b4a706-kube-api-access-gjjlw\") pod \"4fdec671-8c4f-4814-80f3-eb6580b4a706\" (UID: \"4fdec671-8c4f-4814-80f3-eb6580b4a706\") " Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:50.998128 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fdec671-8c4f-4814-80f3-eb6580b4a706-catalog-content\") pod \"4fdec671-8c4f-4814-80f3-eb6580b4a706\" (UID: \"4fdec671-8c4f-4814-80f3-eb6580b4a706\") " Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:50.998212 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fdec671-8c4f-4814-80f3-eb6580b4a706-utilities\") pod \"4fdec671-8c4f-4814-80f3-eb6580b4a706\" (UID: \"4fdec671-8c4f-4814-80f3-eb6580b4a706\") " Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:50.998938 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fdec671-8c4f-4814-80f3-eb6580b4a706-utilities" (OuterVolumeSpecName: "utilities") pod "4fdec671-8c4f-4814-80f3-eb6580b4a706" (UID: "4fdec671-8c4f-4814-80f3-eb6580b4a706"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.004966 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fdec671-8c4f-4814-80f3-eb6580b4a706-kube-api-access-gjjlw" (OuterVolumeSpecName: "kube-api-access-gjjlw") pod "4fdec671-8c4f-4814-80f3-eb6580b4a706" (UID: "4fdec671-8c4f-4814-80f3-eb6580b4a706"). InnerVolumeSpecName "kube-api-access-gjjlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.058839 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fdec671-8c4f-4814-80f3-eb6580b4a706-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fdec671-8c4f-4814-80f3-eb6580b4a706" (UID: "4fdec671-8c4f-4814-80f3-eb6580b4a706"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.098970 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fdec671-8c4f-4814-80f3-eb6580b4a706-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.099000 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fdec671-8c4f-4814-80f3-eb6580b4a706-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.099013 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjjlw\" (UniqueName: \"kubernetes.io/projected/4fdec671-8c4f-4814-80f3-eb6580b4a706-kube-api-access-gjjlw\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.218384 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="335859ad-cb89-458a-9488-8afc994f9471" path="/var/lib/kubelet/pods/335859ad-cb89-458a-9488-8afc994f9471/volumes" Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.219115 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f705a784-9e52-4237-9b15-c7f61ff9ed29" path="/var/lib/kubelet/pods/f705a784-9e52-4237-9b15-c7f61ff9ed29/volumes" Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.278656 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerStarted","Data":"7b99d9875d1bdea691633e5af419b10acd106e3faee10becf6662bf488aeca9f"} Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.281444 4820 generic.go:334] "Generic (PLEG): container finished" podID="4fdec671-8c4f-4814-80f3-eb6580b4a706" containerID="cde3a8fe51a1070993b99059044a35e0fc86005e83995e6cdf07106bbd65d879" exitCode=0 Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.281502 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmllb" event={"ID":"4fdec671-8c4f-4814-80f3-eb6580b4a706","Type":"ContainerDied","Data":"cde3a8fe51a1070993b99059044a35e0fc86005e83995e6cdf07106bbd65d879"} Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.281530 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmllb" event={"ID":"4fdec671-8c4f-4814-80f3-eb6580b4a706","Type":"ContainerDied","Data":"a356323a3713721c01f356c10d1a95d52791fa273040fed553104d74405a7db5"} Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.281546 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dmllb" Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.281549 4820 scope.go:117] "RemoveContainer" containerID="cde3a8fe51a1070993b99059044a35e0fc86005e83995e6cdf07106bbd65d879" Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.282932 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" event={"ID":"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f","Type":"ContainerStarted","Data":"4bd693e2f820d7065695cd83227bdef269369f5cacffa130e044a0b894478e3c"} Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.283118 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.288058 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.307048 4820 scope.go:117] "RemoveContainer" containerID="073be3f76e887996f93fc19a1e484ffa846dcc72bbb66bdcdbf320e743725603" Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.316738 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" podStartSLOduration=4.316708558 podStartE2EDuration="4.316708558s" podCreationTimestamp="2026-02-01 14:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:24:51.314601763 +0000 UTC m=+232.834968067" watchObservedRunningTime="2026-02-01 14:24:51.316708558 +0000 UTC m=+232.837074842" Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.330362 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dmllb"] Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.333734 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8mbbr" Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.335787 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dmllb"] Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.339550 4820 scope.go:117] "RemoveContainer" containerID="7a9afa16c534b7c4d4525ab405e5c3f51f08e564287a544066246f2a637351e6" Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.353450 4820 scope.go:117] "RemoveContainer" containerID="cde3a8fe51a1070993b99059044a35e0fc86005e83995e6cdf07106bbd65d879" Feb 01 14:24:51 crc kubenswrapper[4820]: E0201 14:24:51.356254 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cde3a8fe51a1070993b99059044a35e0fc86005e83995e6cdf07106bbd65d879\": container with ID starting with cde3a8fe51a1070993b99059044a35e0fc86005e83995e6cdf07106bbd65d879 not found: ID does not exist" containerID="cde3a8fe51a1070993b99059044a35e0fc86005e83995e6cdf07106bbd65d879" Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.356321 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cde3a8fe51a1070993b99059044a35e0fc86005e83995e6cdf07106bbd65d879"} err="failed to get container status \"cde3a8fe51a1070993b99059044a35e0fc86005e83995e6cdf07106bbd65d879\": rpc error: code = NotFound desc = could not find container 
\"cde3a8fe51a1070993b99059044a35e0fc86005e83995e6cdf07106bbd65d879\": container with ID starting with cde3a8fe51a1070993b99059044a35e0fc86005e83995e6cdf07106bbd65d879 not found: ID does not exist" Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.356347 4820 scope.go:117] "RemoveContainer" containerID="073be3f76e887996f93fc19a1e484ffa846dcc72bbb66bdcdbf320e743725603" Feb 01 14:24:51 crc kubenswrapper[4820]: E0201 14:24:51.356710 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"073be3f76e887996f93fc19a1e484ffa846dcc72bbb66bdcdbf320e743725603\": container with ID starting with 073be3f76e887996f93fc19a1e484ffa846dcc72bbb66bdcdbf320e743725603 not found: ID does not exist" containerID="073be3f76e887996f93fc19a1e484ffa846dcc72bbb66bdcdbf320e743725603" Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.356768 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"073be3f76e887996f93fc19a1e484ffa846dcc72bbb66bdcdbf320e743725603"} err="failed to get container status \"073be3f76e887996f93fc19a1e484ffa846dcc72bbb66bdcdbf320e743725603\": rpc error: code = NotFound desc = could not find container \"073be3f76e887996f93fc19a1e484ffa846dcc72bbb66bdcdbf320e743725603\": container with ID starting with 073be3f76e887996f93fc19a1e484ffa846dcc72bbb66bdcdbf320e743725603 not found: ID does not exist" Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.356789 4820 scope.go:117] "RemoveContainer" containerID="7a9afa16c534b7c4d4525ab405e5c3f51f08e564287a544066246f2a637351e6" Feb 01 14:24:51 crc kubenswrapper[4820]: E0201 14:24:51.357036 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a9afa16c534b7c4d4525ab405e5c3f51f08e564287a544066246f2a637351e6\": container with ID starting with 7a9afa16c534b7c4d4525ab405e5c3f51f08e564287a544066246f2a637351e6 not found: ID does not exist" containerID="7a9afa16c534b7c4d4525ab405e5c3f51f08e564287a544066246f2a637351e6" Feb 01 14:24:51 crc kubenswrapper[4820]: I0201 14:24:51.357063 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a9afa16c534b7c4d4525ab405e5c3f51f08e564287a544066246f2a637351e6"} err="failed to get container status \"7a9afa16c534b7c4d4525ab405e5c3f51f08e564287a544066246f2a637351e6\": rpc error: code = NotFound desc = could not find container \"7a9afa16c534b7c4d4525ab405e5c3f51f08e564287a544066246f2a637351e6\": container with ID starting with 7a9afa16c534b7c4d4525ab405e5c3f51f08e564287a544066246f2a637351e6 not found: ID does not exist" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.615726 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf"] Feb 01 14:24:52 crc kubenswrapper[4820]: E0201 14:24:52.615966 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="335859ad-cb89-458a-9488-8afc994f9471" containerName="route-controller-manager" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.615979 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="335859ad-cb89-458a-9488-8afc994f9471" containerName="route-controller-manager" Feb 01 14:24:52 crc kubenswrapper[4820]: E0201 14:24:52.615986 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fdec671-8c4f-4814-80f3-eb6580b4a706" containerName="extract-utilities" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.615992 
4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fdec671-8c4f-4814-80f3-eb6580b4a706" containerName="extract-utilities" Feb 01 14:24:52 crc kubenswrapper[4820]: E0201 14:24:52.616014 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fdec671-8c4f-4814-80f3-eb6580b4a706" containerName="registry-server" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.616020 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fdec671-8c4f-4814-80f3-eb6580b4a706" containerName="registry-server" Feb 01 14:24:52 crc kubenswrapper[4820]: E0201 14:24:52.616029 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fdec671-8c4f-4814-80f3-eb6580b4a706" containerName="extract-content" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.616035 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fdec671-8c4f-4814-80f3-eb6580b4a706" containerName="extract-content" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.616165 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fdec671-8c4f-4814-80f3-eb6580b4a706" containerName="registry-server" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.616176 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="335859ad-cb89-458a-9488-8afc994f9471" containerName="route-controller-manager" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.616507 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.620216 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.620304 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.620385 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.620438 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c859d947-1117-42ca-b3d8-605bb491d4b3-serving-cert\") pod \"route-controller-manager-5f4c4d8458-lx9hf\" (UID: \"c859d947-1117-42ca-b3d8-605bb491d4b3\") " pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.620519 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c859d947-1117-42ca-b3d8-605bb491d4b3-client-ca\") pod \"route-controller-manager-5f4c4d8458-lx9hf\" (UID: \"c859d947-1117-42ca-b3d8-605bb491d4b3\") " pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.620599 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.620693 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.620598 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c859d947-1117-42ca-b3d8-605bb491d4b3-config\") pod \"route-controller-manager-5f4c4d8458-lx9hf\" (UID: \"c859d947-1117-42ca-b3d8-605bb491d4b3\") " pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.620665 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.620950 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj46s\" (UniqueName: \"kubernetes.io/projected/c859d947-1117-42ca-b3d8-605bb491d4b3-kube-api-access-mj46s\") pod \"route-controller-manager-5f4c4d8458-lx9hf\" (UID: \"c859d947-1117-42ca-b3d8-605bb491d4b3\") " pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.659518 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf"] Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.721868 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj46s\" (UniqueName: \"kubernetes.io/projected/c859d947-1117-42ca-b3d8-605bb491d4b3-kube-api-access-mj46s\") pod \"route-controller-manager-5f4c4d8458-lx9hf\" (UID: \"c859d947-1117-42ca-b3d8-605bb491d4b3\") " pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.721951 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c859d947-1117-42ca-b3d8-605bb491d4b3-serving-cert\") pod \"route-controller-manager-5f4c4d8458-lx9hf\" (UID: \"c859d947-1117-42ca-b3d8-605bb491d4b3\") " pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.721986 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c859d947-1117-42ca-b3d8-605bb491d4b3-client-ca\") pod \"route-controller-manager-5f4c4d8458-lx9hf\" (UID: \"c859d947-1117-42ca-b3d8-605bb491d4b3\") " pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.722021 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c859d947-1117-42ca-b3d8-605bb491d4b3-config\") pod \"route-controller-manager-5f4c4d8458-lx9hf\" (UID: \"c859d947-1117-42ca-b3d8-605bb491d4b3\") " pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.723132 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c859d947-1117-42ca-b3d8-605bb491d4b3-config\") pod \"route-controller-manager-5f4c4d8458-lx9hf\" (UID: \"c859d947-1117-42ca-b3d8-605bb491d4b3\") " pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.723645 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/c859d947-1117-42ca-b3d8-605bb491d4b3-client-ca\") pod \"route-controller-manager-5f4c4d8458-lx9hf\" (UID: \"c859d947-1117-42ca-b3d8-605bb491d4b3\") " pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.728455 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c859d947-1117-42ca-b3d8-605bb491d4b3-serving-cert\") pod \"route-controller-manager-5f4c4d8458-lx9hf\" (UID: \"c859d947-1117-42ca-b3d8-605bb491d4b3\") " pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.741921 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj46s\" (UniqueName: \"kubernetes.io/projected/c859d947-1117-42ca-b3d8-605bb491d4b3-kube-api-access-mj46s\") pod \"route-controller-manager-5f4c4d8458-lx9hf\" (UID: \"c859d947-1117-42ca-b3d8-605bb491d4b3\") " pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.831442 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8mbbr"] Feb 01 14:24:52 crc kubenswrapper[4820]: I0201 14:24:52.943548 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.205696 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fdec671-8c4f-4814-80f3-eb6580b4a706" path="/var/lib/kubelet/pods/4fdec671-8c4f-4814-80f3-eb6580b4a706/volumes" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.299168 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8mbbr" podUID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" containerName="registry-server" containerID="cri-o://b9a36891771077760a0a0aa8a2f281a5b979dd76bbb323b24333a706074f7bce" gracePeriod=2 Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.318893 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf"] Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.562353 4820 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.563457 4820 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.563731 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.564137 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef" gracePeriod=15 Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.564021 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d" gracePeriod=15 Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.564088 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12" gracePeriod=15 Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.564093 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51" gracePeriod=15 Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.564338 4820 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.564064 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d" gracePeriod=15 Feb 01 14:24:53 crc kubenswrapper[4820]: E0201 14:24:53.564514 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.564525 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 01 14:24:53 crc kubenswrapper[4820]: E0201 14:24:53.564539 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.564544 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 01 14:24:53 crc kubenswrapper[4820]: E0201 14:24:53.564551 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.564558 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 01 14:24:53 crc kubenswrapper[4820]: E0201 14:24:53.564567 4820 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.564575 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 01 14:24:53 crc kubenswrapper[4820]: E0201 14:24:53.564583 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.564591 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 01 14:24:53 crc kubenswrapper[4820]: E0201 14:24:53.564601 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.564608 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 01 14:24:53 crc kubenswrapper[4820]: E0201 14:24:53.564618 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.564624 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.564708 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.564718 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.564728 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.564734 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.564745 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.564751 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.616859 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.734285 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.734326 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.734350 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.734377 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.734400 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.734422 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.734625 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.734699 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.835616 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.835672 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.835702 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.835723 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.835733 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.835795 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.835751 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.835822 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.835821 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.835864 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.835920 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.835916 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.835958 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.835852 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.835941 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.835997 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: I0201 14:24:53.913488 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 14:24:53 crc kubenswrapper[4820]: W0201 14:24:53.935555 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-c05e04fe00a43ce190da90b90f0e33f8ecc2bab7faa5e60d38b5e256f4e6511f WatchSource:0}: Error finding container c05e04fe00a43ce190da90b90f0e33f8ecc2bab7faa5e60d38b5e256f4e6511f: Status 404 returned error can't find the container with id c05e04fe00a43ce190da90b90f0e33f8ecc2bab7faa5e60d38b5e256f4e6511f Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.304340 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c05e04fe00a43ce190da90b90f0e33f8ecc2bab7faa5e60d38b5e256f4e6511f"} Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.306179 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.307418 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.308319 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef" exitCode=0 Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.308343 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d" exitCode=0 Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.308353 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12" exitCode=0 Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.308361 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51" exitCode=2 Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.308437 4820 scope.go:117] "RemoveContainer" containerID="cd8767e8f2c8f7067d28f79d487e89adb35126e7a11135653e9df7171dccdc0d" Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.310385 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" event={"ID":"c859d947-1117-42ca-b3d8-605bb491d4b3","Type":"ContainerStarted","Data":"80e560260aeedfedc4e56691de5c32b6cfb8176aa428ed4bb69f38c040e731af"} Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.310411 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" event={"ID":"c859d947-1117-42ca-b3d8-605bb491d4b3","Type":"ContainerStarted","Data":"c238bc550ea29c29e2704284b173142cd9531540b5115be323b658233bae379b"} Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.310782 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" Feb 01 
14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.312158 4820 status_manager.go:851] "Failed to get status for pod" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5f4c4d8458-lx9hf\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.312611 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.313165 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.314272 4820 generic.go:334] "Generic (PLEG): container finished" podID="3b285186-eeb9-40d4-95cd-bc27968b969f" containerID="049ad51645b7041dfd70c95610b9391fdac2b3663db2441a61a500c2aa81d081" exitCode=0 Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.314337 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3b285186-eeb9-40d4-95cd-bc27968b969f","Type":"ContainerDied","Data":"049ad51645b7041dfd70c95610b9391fdac2b3663db2441a61a500c2aa81d081"} Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.314782 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.315288 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.315691 4820 status_manager.go:851] "Failed to get status for pod" podUID="3b285186-eeb9-40d4-95cd-bc27968b969f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.316173 4820 status_manager.go:851] "Failed to get status for pod" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5f4c4d8458-lx9hf\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.316920 4820 generic.go:334] "Generic 
(PLEG): container finished" podID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" containerID="b9a36891771077760a0a0aa8a2f281a5b979dd76bbb323b24333a706074f7bce" exitCode=0 Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.316957 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8mbbr" event={"ID":"382873c4-83aa-4693-9eb8-7b1f41b0f22b","Type":"ContainerDied","Data":"b9a36891771077760a0a0aa8a2f281a5b979dd76bbb323b24333a706074f7bce"} Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.604634 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8mbbr" Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.605265 4820 status_manager.go:851] "Failed to get status for pod" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5f4c4d8458-lx9hf\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.605446 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.605737 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.606153 4820 status_manager.go:851] "Failed to get status for pod" podUID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" pod="openshift-marketplace/redhat-operators-8mbbr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8mbbr\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.606341 4820 status_manager.go:851] "Failed to get status for pod" podUID="3b285186-eeb9-40d4-95cd-bc27968b969f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.646238 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/382873c4-83aa-4693-9eb8-7b1f41b0f22b-utilities\") pod \"382873c4-83aa-4693-9eb8-7b1f41b0f22b\" (UID: \"382873c4-83aa-4693-9eb8-7b1f41b0f22b\") " Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.646343 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxp4l\" (UniqueName: \"kubernetes.io/projected/382873c4-83aa-4693-9eb8-7b1f41b0f22b-kube-api-access-pxp4l\") pod \"382873c4-83aa-4693-9eb8-7b1f41b0f22b\" (UID: \"382873c4-83aa-4693-9eb8-7b1f41b0f22b\") " Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.646383 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/382873c4-83aa-4693-9eb8-7b1f41b0f22b-catalog-content\") pod \"382873c4-83aa-4693-9eb8-7b1f41b0f22b\" (UID: \"382873c4-83aa-4693-9eb8-7b1f41b0f22b\") " Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.647608 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/382873c4-83aa-4693-9eb8-7b1f41b0f22b-utilities" (OuterVolumeSpecName: "utilities") pod "382873c4-83aa-4693-9eb8-7b1f41b0f22b" (UID: "382873c4-83aa-4693-9eb8-7b1f41b0f22b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.651978 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/382873c4-83aa-4693-9eb8-7b1f41b0f22b-kube-api-access-pxp4l" (OuterVolumeSpecName: "kube-api-access-pxp4l") pod "382873c4-83aa-4693-9eb8-7b1f41b0f22b" (UID: "382873c4-83aa-4693-9eb8-7b1f41b0f22b"). InnerVolumeSpecName "kube-api-access-pxp4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.747213 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxp4l\" (UniqueName: \"kubernetes.io/projected/382873c4-83aa-4693-9eb8-7b1f41b0f22b-kube-api-access-pxp4l\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.747251 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/382873c4-83aa-4693-9eb8-7b1f41b0f22b-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.799279 4820 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.799335 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.811919 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/382873c4-83aa-4693-9eb8-7b1f41b0f22b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "382873c4-83aa-4693-9eb8-7b1f41b0f22b" (UID: "382873c4-83aa-4693-9eb8-7b1f41b0f22b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:24:54 crc kubenswrapper[4820]: I0201 14:24:54.848425 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/382873c4-83aa-4693-9eb8-7b1f41b0f22b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:55 crc kubenswrapper[4820]: E0201 14:24:55.253733 4820 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.73:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" volumeName="registry-storage" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.311319 4820 patch_prober.go:28] interesting pod/route-controller-manager-5f4c4d8458-lx9hf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.311385 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.324860 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8mbbr" event={"ID":"382873c4-83aa-4693-9eb8-7b1f41b0f22b","Type":"ContainerDied","Data":"52492a1310392e18f482ded28d1d5c84cbb9b2fcb9b4991a2c841bfdf4cc232a"} Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.324929 4820 scope.go:117] "RemoveContainer" containerID="b9a36891771077760a0a0aa8a2f281a5b979dd76bbb323b24333a706074f7bce" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.324952 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8mbbr" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.325766 4820 status_manager.go:851] "Failed to get status for pod" podUID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" pod="openshift-marketplace/redhat-operators-8mbbr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8mbbr\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.326065 4820 status_manager.go:851] "Failed to get status for pod" podUID="3b285186-eeb9-40d4-95cd-bc27968b969f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.326333 4820 status_manager.go:851] "Failed to get status for pod" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5f4c4d8458-lx9hf\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.326567 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.327142 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"bc7b3ee409b1b4f640d8aadba1af2170a03ab14572a9b92cb055448869baf8e4"} Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.327436 4820 status_manager.go:851] "Failed to get status for pod" podUID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" pod="openshift-marketplace/redhat-operators-8mbbr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8mbbr\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.327726 4820 status_manager.go:851] "Failed to get status for pod" podUID="3b285186-eeb9-40d4-95cd-bc27968b969f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.328189 4820 status_manager.go:851] "Failed to get status for pod" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5f4c4d8458-lx9hf\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.328501 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.329986 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.330312 4820 status_manager.go:851] "Failed to get status for pod" podUID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" pod="openshift-marketplace/redhat-operators-8mbbr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8mbbr\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.330348 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.330564 4820 status_manager.go:851] "Failed to get status for pod" podUID="3b285186-eeb9-40d4-95cd-bc27968b969f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.330852 4820 status_manager.go:851] "Failed to get status for pod" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5f4c4d8458-lx9hf\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.345204 4820 scope.go:117] "RemoveContainer" containerID="1a4cc2c26b15cf620b1e7048ef115b43011274df4bf0be8b495516e1d0b04244" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.362339 4820 scope.go:117] "RemoveContainer" containerID="9497117ec84edfbe557be95f5b0c71bef28dc622147ef5f01e4a4afd3948055d" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.610607 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.611127 4820 status_manager.go:851] "Failed to get status for pod" podUID="3b285186-eeb9-40d4-95cd-bc27968b969f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.611460 4820 status_manager.go:851] "Failed to get status for pod" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5f4c4d8458-lx9hf\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.611659 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.611832 4820 status_manager.go:851] "Failed to get status for pod" podUID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" pod="openshift-marketplace/redhat-operators-8mbbr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8mbbr\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.767379 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b285186-eeb9-40d4-95cd-bc27968b969f-var-lock" (OuterVolumeSpecName: "var-lock") pod "3b285186-eeb9-40d4-95cd-bc27968b969f" (UID: "3b285186-eeb9-40d4-95cd-bc27968b969f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.767252 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3b285186-eeb9-40d4-95cd-bc27968b969f-var-lock\") pod \"3b285186-eeb9-40d4-95cd-bc27968b969f\" (UID: \"3b285186-eeb9-40d4-95cd-bc27968b969f\") " Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.767736 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b285186-eeb9-40d4-95cd-bc27968b969f-kubelet-dir\") pod \"3b285186-eeb9-40d4-95cd-bc27968b969f\" (UID: \"3b285186-eeb9-40d4-95cd-bc27968b969f\") " Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.767799 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b285186-eeb9-40d4-95cd-bc27968b969f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3b285186-eeb9-40d4-95cd-bc27968b969f" (UID: "3b285186-eeb9-40d4-95cd-bc27968b969f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.767829 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b285186-eeb9-40d4-95cd-bc27968b969f-kube-api-access\") pod \"3b285186-eeb9-40d4-95cd-bc27968b969f\" (UID: \"3b285186-eeb9-40d4-95cd-bc27968b969f\") " Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.769050 4820 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3b285186-eeb9-40d4-95cd-bc27968b969f-var-lock\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.769067 4820 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b285186-eeb9-40d4-95cd-bc27968b969f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.773320 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b285186-eeb9-40d4-95cd-bc27968b969f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3b285186-eeb9-40d4-95cd-bc27968b969f" (UID: "3b285186-eeb9-40d4-95cd-bc27968b969f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:24:55 crc kubenswrapper[4820]: I0201 14:24:55.871331 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b285186-eeb9-40d4-95cd-bc27968b969f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.331364 4820 patch_prober.go:28] interesting pod/route-controller-manager-5f4c4d8458-lx9hf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.331417 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.339899 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.340579 4820 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d" exitCode=0 Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.342358 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3b285186-eeb9-40d4-95cd-bc27968b969f","Type":"ContainerDied","Data":"bcd1366b3d33af59c3f342f2863f24682b0ecf6eb012726334b54e39297b8f3b"} Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.342407 4820 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="bcd1366b3d33af59c3f342f2863f24682b0ecf6eb012726334b54e39297b8f3b" Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.342372 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.395494 4820 status_manager.go:851] "Failed to get status for pod" podUID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" pod="openshift-marketplace/redhat-operators-8mbbr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8mbbr\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.395950 4820 status_manager.go:851] "Failed to get status for pod" podUID="3b285186-eeb9-40d4-95cd-bc27968b969f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.396458 4820 status_manager.go:851] "Failed to get status for pod" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5f4c4d8458-lx9hf\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.396814 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.487472 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.488349 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.488817 4820 status_manager.go:851] "Failed to get status for pod" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5f4c4d8458-lx9hf\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.489038 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.489239 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.489463 4820 status_manager.go:851] "Failed to get status for pod" podUID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" pod="openshift-marketplace/redhat-operators-8mbbr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8mbbr\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.489706 4820 status_manager.go:851] "Failed to get status for pod" podUID="3b285186-eeb9-40d4-95cd-bc27968b969f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.680614 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.680651 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.680716 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.680928 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.680933 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.681888 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.783357 4820 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.783980 4820 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:56 crc kubenswrapper[4820]: I0201 14:24:56.784053 4820 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 01 14:24:57 crc kubenswrapper[4820]: I0201 14:24:57.206358 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 01 14:24:57 crc kubenswrapper[4820]: I0201 14:24:57.349575 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 01 14:24:57 crc kubenswrapper[4820]: I0201 14:24:57.350449 4820 scope.go:117] "RemoveContainer" containerID="b2d5e104706ee3a231365f8f15698611169508c028d0012a0aef4b47f2280fef" Feb 01 14:24:57 crc kubenswrapper[4820]: I0201 14:24:57.350594 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:24:57 crc kubenswrapper[4820]: I0201 14:24:57.351426 4820 status_manager.go:851] "Failed to get status for pod" podUID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" pod="openshift-marketplace/redhat-operators-8mbbr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8mbbr\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:57 crc kubenswrapper[4820]: I0201 14:24:57.351969 4820 status_manager.go:851] "Failed to get status for pod" podUID="3b285186-eeb9-40d4-95cd-bc27968b969f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:57 crc kubenswrapper[4820]: I0201 14:24:57.352457 4820 status_manager.go:851] "Failed to get status for pod" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5f4c4d8458-lx9hf\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:57 crc kubenswrapper[4820]: I0201 14:24:57.352735 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:57 crc kubenswrapper[4820]: I0201 14:24:57.353005 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:57 crc kubenswrapper[4820]: I0201 14:24:57.353586 4820 status_manager.go:851] "Failed to get status for pod" podUID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" pod="openshift-marketplace/redhat-operators-8mbbr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8mbbr\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:57 crc kubenswrapper[4820]: I0201 14:24:57.353766 4820 status_manager.go:851] "Failed to get status for pod" podUID="3b285186-eeb9-40d4-95cd-bc27968b969f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:57 crc kubenswrapper[4820]: I0201 14:24:57.354065 4820 status_manager.go:851] "Failed to get status for pod" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5f4c4d8458-lx9hf\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:57 crc kubenswrapper[4820]: I0201 14:24:57.354252 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:57 crc kubenswrapper[4820]: I0201 14:24:57.354421 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:57 crc kubenswrapper[4820]: I0201 14:24:57.365241 4820 scope.go:117] "RemoveContainer" containerID="4fcb451d31961c18eb96b58273d650c8ca168b51042b25e3896f2056348dec3d" Feb 01 14:24:57 crc kubenswrapper[4820]: I0201 14:24:57.375920 4820 scope.go:117] "RemoveContainer" containerID="ee4b8cd384b5d16565983bea985898580394b07addc3a10c95c8b389e043ff12" Feb 01 14:24:57 crc kubenswrapper[4820]: I0201 14:24:57.386662 4820 scope.go:117] "RemoveContainer" containerID="35b0e5b1a4a90c8707bc6e5670d74a9112b4aa2ecf216c31a950b99d89006f51" Feb 01 14:24:57 crc kubenswrapper[4820]: I0201 14:24:57.397560 4820 scope.go:117] "RemoveContainer" containerID="155e379523650d403be4ab885a8311613c4deca47637a71870c6e48a70c4756d" Feb 01 14:24:57 crc kubenswrapper[4820]: I0201 14:24:57.411066 4820 scope.go:117] "RemoveContainer" containerID="9206d8d073f48c1f820094a0fa485eeeae725f289c21936525a81c904544fb7c" Feb 01 14:24:58 crc kubenswrapper[4820]: E0201 14:24:58.635016 4820 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/events\": dial tcp 38.102.83.73:6443: connect: connection refused" event="&Event{ObjectMeta:{route-controller-manager-5f4c4d8458-lx9hf.18902581f412da06 openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-5f4c4d8458-lx9hf,UID:c859d947-1117-42ca-b3d8-605bb491d4b3,APIVersion:v1,ResourceVersion:29708,FieldPath:spec.containers{route-controller-manager},},Reason:Created,Message:Created container route-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-01 14:24:53.63248999 +0000 UTC m=+235.152856274,LastTimestamp:2026-02-01 14:24:53.63248999 +0000 UTC m=+235.152856274,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 01 14:24:58 crc kubenswrapper[4820]: E0201 14:24:58.745157 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:24:58Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:24:58Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:24:58Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T14:24:58Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:58 crc kubenswrapper[4820]: E0201 14:24:58.745504 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:58 crc kubenswrapper[4820]: E0201 14:24:58.745738 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:58 crc kubenswrapper[4820]: E0201 14:24:58.745918 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:58 crc kubenswrapper[4820]: E0201 14:24:58.746548 4820 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:58 crc kubenswrapper[4820]: E0201 14:24:58.746566 4820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 01 14:24:58 crc kubenswrapper[4820]: E0201 14:24:58.927749 4820 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/events\": dial tcp 38.102.83.73:6443: connect: connection refused" event="&Event{ObjectMeta:{route-controller-manager-5f4c4d8458-lx9hf.18902581f412da06 openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-5f4c4d8458-lx9hf,UID:c859d947-1117-42ca-b3d8-605bb491d4b3,APIVersion:v1,ResourceVersion:29708,FieldPath:spec.containers{route-controller-manager},},Reason:Created,Message:Created container route-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-01 14:24:53.63248999 +0000 UTC m=+235.152856274,LastTimestamp:2026-02-01 14:24:53.63248999 +0000 UTC m=+235.152856274,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 01 14:24:58 crc kubenswrapper[4820]: E0201 14:24:58.975570 4820 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:58 crc kubenswrapper[4820]: E0201 14:24:58.976383 4820 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:58 crc kubenswrapper[4820]: E0201 14:24:58.976621 4820 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:58 crc kubenswrapper[4820]: E0201 14:24:58.976927 4820 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:58 crc kubenswrapper[4820]: E0201 14:24:58.977227 4820 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:58 crc kubenswrapper[4820]: I0201 14:24:58.977262 4820 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 01 14:24:58 crc kubenswrapper[4820]: E0201 14:24:58.977790 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="200ms" Feb 01 14:24:59 crc kubenswrapper[4820]: E0201 14:24:59.178831 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="400ms" Feb 01 14:24:59 crc kubenswrapper[4820]: I0201 14:24:59.201293 4820 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:59 crc kubenswrapper[4820]: I0201 14:24:59.202180 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:59 crc kubenswrapper[4820]: I0201 14:24:59.203809 4820 status_manager.go:851] "Failed to get status for pod" podUID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" pod="openshift-marketplace/redhat-operators-8mbbr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8mbbr\": 
dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:59 crc kubenswrapper[4820]: I0201 14:24:59.204039 4820 status_manager.go:851] "Failed to get status for pod" podUID="3b285186-eeb9-40d4-95cd-bc27968b969f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:59 crc kubenswrapper[4820]: I0201 14:24:59.204264 4820 status_manager.go:851] "Failed to get status for pod" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5f4c4d8458-lx9hf\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:24:59 crc kubenswrapper[4820]: E0201 14:24:59.579389 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="800ms" Feb 01 14:25:00 crc kubenswrapper[4820]: E0201 14:25:00.380440 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="1.6s" Feb 01 14:25:01 crc kubenswrapper[4820]: E0201 14:25:01.981508 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="3.2s" Feb 01 14:25:03 crc kubenswrapper[4820]: I0201 14:25:03.945526 4820 patch_prober.go:28] interesting pod/route-controller-manager-5f4c4d8458-lx9hf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 01 14:25:03 crc kubenswrapper[4820]: I0201 14:25:03.948036 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 01 14:25:05 crc kubenswrapper[4820]: E0201 14:25:05.183269 4820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="6.4s" Feb 01 14:25:05 crc kubenswrapper[4820]: I0201 14:25:05.198626 4820 util.go:30] "No sandbox for pod can be found. 
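[Annotation] While api-int.crc.testing:6443 stays unreachable, the kubelet bounds its API traffic in two ways visible above: per-sync retries are capped ("failed 5 attempts to update lease", "update node status exceeds retry count"), and the controller.go:145 "Failed to ensure lease exists, will retry" entries back off with a doubling interval: 200ms, 400ms, 800ms, 1.6s, 3.2s, 6.4s. A minimal sketch of that pattern, folding the attempt bound and the doubling backoff into one loop for brevity (illustrative constants; the real values live in the kubelet's lease controller configuration):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // ensureLease stands in for the kubelet's lease call against
    // api-int.crc.testing:6443, which keeps failing while the apiserver restarts.
    func ensureLease() error {
        return errors.New("dial tcp 38.102.83.73:6443: connect: connection refused")
    }

    func main() {
        const maxAttempts = 5 // mirrors "failed 5 attempts to update lease"
        interval := 200 * time.Millisecond
        for attempt := 1; attempt <= maxAttempts; attempt++ {
            err := ensureLease()
            if err == nil {
                return
            }
            fmt.Printf("Failed to ensure lease exists, will retry err=%v interval=%v\n", err, interval)
            time.Sleep(interval)
            interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s -> 3.2s, as in the log
        }
        fmt.Printf("failed %d attempts to update lease\n", maxAttempts)
    }

The same connection-refused error threads through the status_manager.go:851 and event.go:368 entries: every API-bound path fails identically until the new kube-apiserver comes up.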
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:25:05 crc kubenswrapper[4820]: I0201 14:25:05.199514 4820 status_manager.go:851] "Failed to get status for pod" podUID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" pod="openshift-marketplace/redhat-operators-8mbbr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8mbbr\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:25:05 crc kubenswrapper[4820]: I0201 14:25:05.200225 4820 status_manager.go:851] "Failed to get status for pod" podUID="3b285186-eeb9-40d4-95cd-bc27968b969f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:25:05 crc kubenswrapper[4820]: I0201 14:25:05.200544 4820 status_manager.go:851] "Failed to get status for pod" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5f4c4d8458-lx9hf\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:25:05 crc kubenswrapper[4820]: I0201 14:25:05.200826 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:25:05 crc kubenswrapper[4820]: I0201 14:25:05.216682 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2aea2d10-281a-4986-b42d-205f8c7c1272" Feb 01 14:25:05 crc kubenswrapper[4820]: I0201 14:25:05.216725 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2aea2d10-281a-4986-b42d-205f8c7c1272" Feb 01 14:25:05 crc kubenswrapper[4820]: E0201 14:25:05.217244 4820 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:25:05 crc kubenswrapper[4820]: I0201 14:25:05.217956 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:25:05 crc kubenswrapper[4820]: I0201 14:25:05.394993 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bc36b2cc1e6dab040a8a933ce4b21ba45cf8f26920bef0ba95a9ee357f8cdeb9"} Feb 01 14:25:06 crc kubenswrapper[4820]: I0201 14:25:06.404190 4820 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="a3337544478b6244cf932db1600525970cbee7d80f16c9c1b9ea138506b40c0c" exitCode=0 Feb 01 14:25:06 crc kubenswrapper[4820]: I0201 14:25:06.404281 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"a3337544478b6244cf932db1600525970cbee7d80f16c9c1b9ea138506b40c0c"} Feb 01 14:25:06 crc kubenswrapper[4820]: I0201 14:25:06.404448 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2aea2d10-281a-4986-b42d-205f8c7c1272" Feb 01 14:25:06 crc kubenswrapper[4820]: I0201 14:25:06.404641 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2aea2d10-281a-4986-b42d-205f8c7c1272" Feb 01 14:25:06 crc kubenswrapper[4820]: I0201 14:25:06.405306 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:25:06 crc kubenswrapper[4820]: E0201 14:25:06.405304 4820 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:25:06 crc kubenswrapper[4820]: I0201 14:25:06.405863 4820 status_manager.go:851] "Failed to get status for pod" podUID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" pod="openshift-marketplace/redhat-operators-8mbbr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8mbbr\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:25:06 crc kubenswrapper[4820]: I0201 14:25:06.406394 4820 status_manager.go:851] "Failed to get status for pod" podUID="3b285186-eeb9-40d4-95cd-bc27968b969f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:25:06 crc kubenswrapper[4820]: I0201 14:25:06.406830 4820 status_manager.go:851] "Failed to get status for pod" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5f4c4d8458-lx9hf\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:25:06 crc kubenswrapper[4820]: I0201 14:25:06.407870 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 01 14:25:06 crc kubenswrapper[4820]: I0201 14:25:06.407921 4820 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f" exitCode=1 Feb 01 14:25:06 crc kubenswrapper[4820]: I0201 14:25:06.407941 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f"} Feb 01 14:25:06 crc kubenswrapper[4820]: I0201 14:25:06.408199 4820 scope.go:117] "RemoveContainer" containerID="88f47d2b079af893daf4c2fc6486930488953e419473a4428b0fab84ddea158f" Feb 01 14:25:06 crc kubenswrapper[4820]: I0201 14:25:06.408804 4820 status_manager.go:851] "Failed to get status for pod" podUID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" pod="openshift-marketplace/redhat-operators-8mbbr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8mbbr\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:25:06 crc kubenswrapper[4820]: I0201 14:25:06.409314 4820 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:25:06 crc kubenswrapper[4820]: I0201 14:25:06.409776 4820 status_manager.go:851] "Failed to get status for pod" podUID="3b285186-eeb9-40d4-95cd-bc27968b969f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:25:06 crc kubenswrapper[4820]: I0201 14:25:06.410250 4820 status_manager.go:851] "Failed to get status for pod" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5f4c4d8458-lx9hf\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:25:06 crc kubenswrapper[4820]: I0201 14:25:06.410763 4820 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 01 14:25:06 crc kubenswrapper[4820]: I0201 14:25:06.798099 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 14:25:07 crc kubenswrapper[4820]: I0201 14:25:07.415853 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 01 14:25:07 crc kubenswrapper[4820]: I0201 14:25:07.416181 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"876daaff5df7560587a014ee8e995d448fe870c758d20ed498c432aa0c7e3d5c"} Feb 01 14:25:07 crc kubenswrapper[4820]: I0201 14:25:07.421718 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"975ea9742671036df2f608f93f8280657cb98a0bb2b18463e8762de6d7f09062"} Feb 01 14:25:07 crc kubenswrapper[4820]: I0201 14:25:07.421761 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9288cfa8e7d684320f4f5c30ecb9397d522d5ddf51b737722ef8b4165f956382"} Feb 01 14:25:07 crc kubenswrapper[4820]: I0201 14:25:07.421772 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1037d422f3863d631a3b189c6b5a856e928361e24a8c9ce7fb598e4cbf9a26fd"} Feb 01 14:25:07 crc kubenswrapper[4820]: I0201 14:25:07.421783 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"24e7d0cda6cb54ee0de0c73048ce87ff1b2832feea465e8f4082e0d915b4fbdc"} Feb 01 14:25:08 crc kubenswrapper[4820]: I0201 14:25:08.079269 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 14:25:08 crc kubenswrapper[4820]: I0201 14:25:08.440008 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"eb370ccb279a329a6da9fb849cb311edf29b57c1fa5dd91169a5686380113bd4"} Feb 01 14:25:08 crc kubenswrapper[4820]: I0201 14:25:08.441146 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2aea2d10-281a-4986-b42d-205f8c7c1272" Feb 01 14:25:08 crc kubenswrapper[4820]: I0201 14:25:08.441174 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2aea2d10-281a-4986-b42d-205f8c7c1272" Feb 01 14:25:10 crc kubenswrapper[4820]: I0201 14:25:10.218888 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:25:10 crc kubenswrapper[4820]: I0201 14:25:10.219213 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:25:10 crc kubenswrapper[4820]: I0201 14:25:10.226499 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:25:13 crc kubenswrapper[4820]: I0201 14:25:13.449310 4820 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:25:13 crc kubenswrapper[4820]: I0201 14:25:13.466239 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2aea2d10-281a-4986-b42d-205f8c7c1272" Feb 01 14:25:13 crc kubenswrapper[4820]: I0201 14:25:13.466264 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="2aea2d10-281a-4986-b42d-205f8c7c1272" Feb 01 14:25:13 crc kubenswrapper[4820]: I0201 14:25:13.466290 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:25:13 crc kubenswrapper[4820]: I0201 14:25:13.470122 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:25:13 crc kubenswrapper[4820]: I0201 14:25:13.473441 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a721c889-1aa4-42bc-8da6-8b66d9d1ebdf" Feb 01 14:25:13 crc kubenswrapper[4820]: I0201 14:25:13.516174 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 14:25:13 crc kubenswrapper[4820]: I0201 14:25:13.520636 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 14:25:13 crc kubenswrapper[4820]: I0201 14:25:13.613470 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" podUID="d0e5cde7-6e0f-4213-94d4-746cfdb568e9" containerName="oauth-openshift" containerID="cri-o://bfe4f2a5db1e5e180615572207819f154a35230e96e6cf3ae4d65a4608c93e0d" gracePeriod=15 Feb 01 14:25:13 crc kubenswrapper[4820]: I0201 14:25:13.944683 4820 patch_prober.go:28] interesting pod/route-controller-manager-5f4c4d8458-lx9hf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 01 14:25:13 crc kubenswrapper[4820]: I0201 14:25:13.945056 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.082275 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.282624 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-audit-policies\") pod \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.283202 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-user-template-login\") pod \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.283333 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-user-template-error\") pod \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.283415 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d0e5cde7-6e0f-4213-94d4-746cfdb568e9" (UID: "d0e5cde7-6e0f-4213-94d4-746cfdb568e9"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.284079 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-user-template-provider-selection\") pod \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.284178 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-cliconfig\") pod \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.284264 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-user-idp-0-file-data\") pod \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.284328 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzrhk\" (UniqueName: \"kubernetes.io/projected/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-kube-api-access-fzrhk\") pod \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.284616 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-ocp-branding-template\") pod \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\" 
(UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.285152 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-router-certs\") pod \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.285817 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-service-ca\") pod \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.286133 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-audit-dir\") pod \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.288231 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "d0e5cde7-6e0f-4213-94d4-746cfdb568e9" (UID: "d0e5cde7-6e0f-4213-94d4-746cfdb568e9"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.289413 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-session\") pod \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.289458 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-serving-cert\") pod \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.289675 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-trusted-ca-bundle\") pod \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\" (UID: \"d0e5cde7-6e0f-4213-94d4-746cfdb568e9\") " Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.293857 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "d0e5cde7-6e0f-4213-94d4-746cfdb568e9" (UID: "d0e5cde7-6e0f-4213-94d4-746cfdb568e9"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.296004 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "d0e5cde7-6e0f-4213-94d4-746cfdb568e9" (UID: "d0e5cde7-6e0f-4213-94d4-746cfdb568e9"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.296501 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "d0e5cde7-6e0f-4213-94d4-746cfdb568e9" (UID: "d0e5cde7-6e0f-4213-94d4-746cfdb568e9"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.296633 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d0e5cde7-6e0f-4213-94d4-746cfdb568e9" (UID: "d0e5cde7-6e0f-4213-94d4-746cfdb568e9"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.296723 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "d0e5cde7-6e0f-4213-94d4-746cfdb568e9" (UID: "d0e5cde7-6e0f-4213-94d4-746cfdb568e9"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.296864 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "d0e5cde7-6e0f-4213-94d4-746cfdb568e9" (UID: "d0e5cde7-6e0f-4213-94d4-746cfdb568e9"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.297125 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "d0e5cde7-6e0f-4213-94d4-746cfdb568e9" (UID: "d0e5cde7-6e0f-4213-94d4-746cfdb568e9"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.297589 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "d0e5cde7-6e0f-4213-94d4-746cfdb568e9" (UID: "d0e5cde7-6e0f-4213-94d4-746cfdb568e9"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.297602 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-kube-api-access-fzrhk" (OuterVolumeSpecName: "kube-api-access-fzrhk") pod "d0e5cde7-6e0f-4213-94d4-746cfdb568e9" (UID: "d0e5cde7-6e0f-4213-94d4-746cfdb568e9"). InnerVolumeSpecName "kube-api-access-fzrhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.297861 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "d0e5cde7-6e0f-4213-94d4-746cfdb568e9" (UID: "d0e5cde7-6e0f-4213-94d4-746cfdb568e9"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.299050 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "d0e5cde7-6e0f-4213-94d4-746cfdb568e9" (UID: "d0e5cde7-6e0f-4213-94d4-746cfdb568e9"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.299380 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "d0e5cde7-6e0f-4213-94d4-746cfdb568e9" (UID: "d0e5cde7-6e0f-4213-94d4-746cfdb568e9"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.394635 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.394938 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.395073 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.395169 4820 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.395270 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.395364 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.395447 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.395532 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.395629 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.395718 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzrhk\" (UniqueName: \"kubernetes.io/projected/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-kube-api-access-fzrhk\") on node \"crc\" DevicePath \"\"" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.395840 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.395945 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.396026 4820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.396108 4820 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d0e5cde7-6e0f-4213-94d4-746cfdb568e9-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.472417 4820 generic.go:334] "Generic (PLEG): container finished" podID="d0e5cde7-6e0f-4213-94d4-746cfdb568e9" containerID="bfe4f2a5db1e5e180615572207819f154a35230e96e6cf3ae4d65a4608c93e0d" exitCode=0 Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.472532 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.472627 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" event={"ID":"d0e5cde7-6e0f-4213-94d4-746cfdb568e9","Type":"ContainerDied","Data":"bfe4f2a5db1e5e180615572207819f154a35230e96e6cf3ae4d65a4608c93e0d"} Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.472697 4820 scope.go:117] "RemoveContainer" containerID="bfe4f2a5db1e5e180615572207819f154a35230e96e6cf3ae4d65a4608c93e0d" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.472788 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f4mwh" event={"ID":"d0e5cde7-6e0f-4213-94d4-746cfdb568e9","Type":"ContainerDied","Data":"ee056db5b4b303aef8dc37aed6ce42cabca130351d993b367ebda045eb9930be"} Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.473155 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2aea2d10-281a-4986-b42d-205f8c7c1272" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.473171 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2aea2d10-281a-4986-b42d-205f8c7c1272" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.508354 4820 scope.go:117] "RemoveContainer" containerID="bfe4f2a5db1e5e180615572207819f154a35230e96e6cf3ae4d65a4608c93e0d" Feb 01 14:25:14 crc kubenswrapper[4820]: E0201 14:25:14.508953 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfe4f2a5db1e5e180615572207819f154a35230e96e6cf3ae4d65a4608c93e0d\": container with ID starting with bfe4f2a5db1e5e180615572207819f154a35230e96e6cf3ae4d65a4608c93e0d not found: ID does not exist" containerID="bfe4f2a5db1e5e180615572207819f154a35230e96e6cf3ae4d65a4608c93e0d" Feb 01 14:25:14 crc kubenswrapper[4820]: I0201 14:25:14.509000 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfe4f2a5db1e5e180615572207819f154a35230e96e6cf3ae4d65a4608c93e0d"} err="failed to get container status \"bfe4f2a5db1e5e180615572207819f154a35230e96e6cf3ae4d65a4608c93e0d\": rpc error: code = NotFound desc = could not find container \"bfe4f2a5db1e5e180615572207819f154a35230e96e6cf3ae4d65a4608c93e0d\": container with ID starting with 
bfe4f2a5db1e5e180615572207819f154a35230e96e6cf3ae4d65a4608c93e0d not found: ID does not exist" Feb 01 14:25:15 crc kubenswrapper[4820]: I0201 14:25:15.479632 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2aea2d10-281a-4986-b42d-205f8c7c1272" Feb 01 14:25:15 crc kubenswrapper[4820]: I0201 14:25:15.479668 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2aea2d10-281a-4986-b42d-205f8c7c1272" Feb 01 14:25:18 crc kubenswrapper[4820]: I0201 14:25:18.083490 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 14:25:19 crc kubenswrapper[4820]: I0201 14:25:19.212766 4820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a721c889-1aa4-42bc-8da6-8b66d9d1ebdf" Feb 01 14:25:23 crc kubenswrapper[4820]: I0201 14:25:23.400371 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 01 14:25:23 crc kubenswrapper[4820]: I0201 14:25:23.550131 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 01 14:25:23 crc kubenswrapper[4820]: I0201 14:25:23.880086 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 01 14:25:23 crc kubenswrapper[4820]: I0201 14:25:23.945092 4820 patch_prober.go:28] interesting pod/route-controller-manager-5f4c4d8458-lx9hf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 01 14:25:23 crc kubenswrapper[4820]: I0201 14:25:23.945151 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 01 14:25:24 crc kubenswrapper[4820]: I0201 14:25:24.241343 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 01 14:25:24 crc kubenswrapper[4820]: I0201 14:25:24.528394 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-5f4c4d8458-lx9hf_c859d947-1117-42ca-b3d8-605bb491d4b3/route-controller-manager/0.log" Feb 01 14:25:24 crc kubenswrapper[4820]: I0201 14:25:24.528684 4820 generic.go:334] "Generic (PLEG): container finished" podID="c859d947-1117-42ca-b3d8-605bb491d4b3" containerID="80e560260aeedfedc4e56691de5c32b6cfb8176aa428ed4bb69f38c040e731af" exitCode=255 Feb 01 14:25:24 crc kubenswrapper[4820]: I0201 14:25:24.528715 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" event={"ID":"c859d947-1117-42ca-b3d8-605bb491d4b3","Type":"ContainerDied","Data":"80e560260aeedfedc4e56691de5c32b6cfb8176aa428ed4bb69f38c040e731af"} Feb 01 
14:25:24 crc kubenswrapper[4820]: I0201 14:25:24.529279 4820 scope.go:117] "RemoveContainer" containerID="80e560260aeedfedc4e56691de5c32b6cfb8176aa428ed4bb69f38c040e731af" Feb 01 14:25:24 crc kubenswrapper[4820]: I0201 14:25:24.882224 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 01 14:25:25 crc kubenswrapper[4820]: I0201 14:25:25.054135 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 01 14:25:25 crc kubenswrapper[4820]: I0201 14:25:25.536719 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-5f4c4d8458-lx9hf_c859d947-1117-42ca-b3d8-605bb491d4b3/route-controller-manager/0.log" Feb 01 14:25:25 crc kubenswrapper[4820]: I0201 14:25:25.537169 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" event={"ID":"c859d947-1117-42ca-b3d8-605bb491d4b3","Type":"ContainerStarted","Data":"ded517109d3fcf9bce38f44d4f49533f92b07326a615497ec038f2bdf04a197e"} Feb 01 14:25:25 crc kubenswrapper[4820]: I0201 14:25:25.537678 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" Feb 01 14:25:25 crc kubenswrapper[4820]: I0201 14:25:25.625771 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 01 14:25:26 crc kubenswrapper[4820]: I0201 14:25:26.240201 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 01 14:25:26 crc kubenswrapper[4820]: I0201 14:25:26.265813 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 01 14:25:26 crc kubenswrapper[4820]: I0201 14:25:26.382117 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 01 14:25:26 crc kubenswrapper[4820]: I0201 14:25:26.418209 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 01 14:25:26 crc kubenswrapper[4820]: I0201 14:25:26.440450 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 01 14:25:26 crc kubenswrapper[4820]: I0201 14:25:26.521742 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 01 14:25:26 crc kubenswrapper[4820]: I0201 14:25:26.537990 4820 patch_prober.go:28] interesting pod/route-controller-manager-5f4c4d8458-lx9hf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 01 14:25:26 crc kubenswrapper[4820]: I0201 14:25:26.538041 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Feb 01 14:25:26 crc kubenswrapper[4820]: I0201 14:25:26.556589 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 01 14:25:26 crc kubenswrapper[4820]: I0201 14:25:26.582429 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 01 14:25:26 crc kubenswrapper[4820]: I0201 14:25:26.668531 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 01 14:25:26 crc kubenswrapper[4820]: I0201 14:25:26.671059 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 01 14:25:26 crc kubenswrapper[4820]: I0201 14:25:26.889985 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 01 14:25:26 crc kubenswrapper[4820]: I0201 14:25:26.958073 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 01 14:25:26 crc kubenswrapper[4820]: I0201 14:25:26.976907 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 01 14:25:27 crc kubenswrapper[4820]: I0201 14:25:27.386631 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 01 14:25:27 crc kubenswrapper[4820]: I0201 14:25:27.390655 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 01 14:25:27 crc kubenswrapper[4820]: I0201 14:25:27.409816 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 01 14:25:27 crc kubenswrapper[4820]: I0201 14:25:27.477690 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 01 14:25:27 crc kubenswrapper[4820]: I0201 14:25:27.542200 4820 patch_prober.go:28] interesting pod/route-controller-manager-5f4c4d8458-lx9hf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 01 14:25:27 crc kubenswrapper[4820]: I0201 14:25:27.542262 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 01 14:25:27 crc kubenswrapper[4820]: I0201 14:25:27.643248 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 01 14:25:27 crc kubenswrapper[4820]: I0201 14:25:27.751468 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 01 14:25:27 crc kubenswrapper[4820]: I0201 14:25:27.864996 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 01 14:25:27 crc kubenswrapper[4820]: I0201 14:25:27.890083 4820 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 01 14:25:27 crc kubenswrapper[4820]: I0201 14:25:27.893551 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 01 14:25:27 crc kubenswrapper[4820]: I0201 14:25:27.901200 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 01 14:25:27 crc kubenswrapper[4820]: I0201 14:25:27.903177 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 01 14:25:28 crc kubenswrapper[4820]: I0201 14:25:28.034726 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 01 14:25:28 crc kubenswrapper[4820]: I0201 14:25:28.039891 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 01 14:25:28 crc kubenswrapper[4820]: I0201 14:25:28.193725 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 01 14:25:28 crc kubenswrapper[4820]: I0201 14:25:28.233856 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 01 14:25:28 crc kubenswrapper[4820]: I0201 14:25:28.317076 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 01 14:25:28 crc kubenswrapper[4820]: I0201 14:25:28.374594 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 01 14:25:28 crc kubenswrapper[4820]: I0201 14:25:28.492849 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 01 14:25:28 crc kubenswrapper[4820]: I0201 14:25:28.518161 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 01 14:25:28 crc kubenswrapper[4820]: I0201 14:25:28.528767 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 01 14:25:28 crc kubenswrapper[4820]: I0201 14:25:28.590921 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 01 14:25:28 crc kubenswrapper[4820]: I0201 14:25:28.626011 4820 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 01 14:25:28 crc kubenswrapper[4820]: I0201 14:25:28.631311 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 01 14:25:28 crc kubenswrapper[4820]: I0201 14:25:28.634615 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 01 14:25:28 crc kubenswrapper[4820]: I0201 14:25:28.766080 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 01 14:25:28 crc kubenswrapper[4820]: I0201 14:25:28.854524 4820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 01 14:25:28 crc kubenswrapper[4820]: I0201 14:25:28.857548 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 01 14:25:28 crc kubenswrapper[4820]: I0201 14:25:28.907326 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 01 14:25:29 crc kubenswrapper[4820]: I0201 14:25:29.171035 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 01 14:25:29 crc kubenswrapper[4820]: I0201 14:25:29.225661 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 01 14:25:29 crc kubenswrapper[4820]: I0201 14:25:29.246846 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 01 14:25:29 crc kubenswrapper[4820]: I0201 14:25:29.297613 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 01 14:25:29 crc kubenswrapper[4820]: I0201 14:25:29.427713 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 01 14:25:29 crc kubenswrapper[4820]: I0201 14:25:29.430212 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 01 14:25:29 crc kubenswrapper[4820]: I0201 14:25:29.496354 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 01 14:25:29 crc kubenswrapper[4820]: I0201 14:25:29.525628 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 01 14:25:29 crc kubenswrapper[4820]: I0201 14:25:29.533518 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 01 14:25:29 crc kubenswrapper[4820]: I0201 14:25:29.550968 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 01 14:25:29 crc kubenswrapper[4820]: I0201 14:25:29.562215 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 01 14:25:29 crc kubenswrapper[4820]: I0201 14:25:29.763618 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 01 14:25:29 crc kubenswrapper[4820]: I0201 14:25:29.786353 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 01 14:25:29 crc kubenswrapper[4820]: I0201 14:25:29.928422 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 01 14:25:29 crc kubenswrapper[4820]: I0201 14:25:29.963403 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 01 14:25:30 crc kubenswrapper[4820]: I0201 14:25:30.003460 4820 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 01 14:25:30 crc kubenswrapper[4820]: I0201 14:25:30.158463 4820 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 01 14:25:30 crc kubenswrapper[4820]: I0201 14:25:30.213991 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 01 14:25:30 crc kubenswrapper[4820]: I0201 14:25:30.309253 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 01 14:25:30 crc kubenswrapper[4820]: I0201 14:25:30.368998 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 01 14:25:30 crc kubenswrapper[4820]: I0201 14:25:30.478109 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 01 14:25:30 crc kubenswrapper[4820]: I0201 14:25:30.479905 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 01 14:25:30 crc kubenswrapper[4820]: I0201 14:25:30.480836 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 01 14:25:30 crc kubenswrapper[4820]: I0201 14:25:30.502764 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 01 14:25:30 crc kubenswrapper[4820]: I0201 14:25:30.544463 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 01 14:25:30 crc kubenswrapper[4820]: I0201 14:25:30.612328 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 01 14:25:30 crc kubenswrapper[4820]: I0201 14:25:30.691703 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 01 14:25:30 crc kubenswrapper[4820]: I0201 14:25:30.702777 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 01 14:25:30 crc kubenswrapper[4820]: I0201 14:25:30.733782 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 01 14:25:30 crc kubenswrapper[4820]: I0201 14:25:30.739534 4820 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 01 14:25:30 crc kubenswrapper[4820]: I0201 14:25:30.801393 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 01 14:25:30 crc kubenswrapper[4820]: I0201 14:25:30.841487 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 01 14:25:30 crc kubenswrapper[4820]: I0201 14:25:30.884993 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 01 14:25:31 crc kubenswrapper[4820]: I0201 14:25:31.228136 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 01 14:25:31 crc kubenswrapper[4820]: I0201 14:25:31.270772 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 01 14:25:31 crc kubenswrapper[4820]: I0201 
14:25:31.308285 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 01 14:25:31 crc kubenswrapper[4820]: I0201 14:25:31.340314 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 01 14:25:31 crc kubenswrapper[4820]: I0201 14:25:31.423220 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 01 14:25:31 crc kubenswrapper[4820]: I0201 14:25:31.497250 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 01 14:25:31 crc kubenswrapper[4820]: I0201 14:25:31.508112 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 01 14:25:31 crc kubenswrapper[4820]: I0201 14:25:31.686487 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 01 14:25:31 crc kubenswrapper[4820]: I0201 14:25:31.850707 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 01 14:25:31 crc kubenswrapper[4820]: I0201 14:25:31.873183 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 01 14:25:31 crc kubenswrapper[4820]: I0201 14:25:31.934935 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 01 14:25:31 crc kubenswrapper[4820]: I0201 14:25:31.952052 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 01 14:25:31 crc kubenswrapper[4820]: I0201 14:25:31.982085 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 01 14:25:32 crc kubenswrapper[4820]: I0201 14:25:32.085011 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 01 14:25:32 crc kubenswrapper[4820]: I0201 14:25:32.087929 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 01 14:25:32 crc kubenswrapper[4820]: I0201 14:25:32.142999 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 01 14:25:32 crc kubenswrapper[4820]: I0201 14:25:32.166810 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 01 14:25:32 crc kubenswrapper[4820]: I0201 14:25:32.185481 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 01 14:25:32 crc kubenswrapper[4820]: I0201 14:25:32.232478 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 01 14:25:32 crc kubenswrapper[4820]: I0201 14:25:32.257906 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 01 14:25:32 crc kubenswrapper[4820]: I0201 14:25:32.301788 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 01 14:25:32 crc kubenswrapper[4820]: I0201 
14:25:32.331220 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 01 14:25:32 crc kubenswrapper[4820]: I0201 14:25:32.415514 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 01 14:25:32 crc kubenswrapper[4820]: I0201 14:25:32.441507 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 01 14:25:32 crc kubenswrapper[4820]: I0201 14:25:32.475998 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 01 14:25:32 crc kubenswrapper[4820]: I0201 14:25:32.514818 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 01 14:25:32 crc kubenswrapper[4820]: I0201 14:25:32.641380 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 01 14:25:32 crc kubenswrapper[4820]: I0201 14:25:32.760108 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 01 14:25:32 crc kubenswrapper[4820]: I0201 14:25:32.814807 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 01 14:25:32 crc kubenswrapper[4820]: I0201 14:25:32.930034 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 01 14:25:32 crc kubenswrapper[4820]: I0201 14:25:32.948727 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" Feb 01 14:25:33 crc kubenswrapper[4820]: I0201 14:25:33.065637 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 01 14:25:33 crc kubenswrapper[4820]: I0201 14:25:33.108117 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 01 14:25:33 crc kubenswrapper[4820]: I0201 14:25:33.135080 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 01 14:25:33 crc kubenswrapper[4820]: I0201 14:25:33.150505 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 01 14:25:33 crc kubenswrapper[4820]: I0201 14:25:33.262590 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 01 14:25:33 crc kubenswrapper[4820]: I0201 14:25:33.300504 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 01 14:25:33 crc kubenswrapper[4820]: I0201 14:25:33.365849 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 01 14:25:33 crc kubenswrapper[4820]: I0201 14:25:33.433805 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 01 14:25:33 crc kubenswrapper[4820]: I0201 14:25:33.448744 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 01 14:25:33 crc kubenswrapper[4820]: I0201 
14:25:33.471473 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 01 14:25:33 crc kubenswrapper[4820]: I0201 14:25:33.522128 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 01 14:25:33 crc kubenswrapper[4820]: I0201 14:25:33.588435 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 01 14:25:33 crc kubenswrapper[4820]: I0201 14:25:33.658190 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 01 14:25:33 crc kubenswrapper[4820]: I0201 14:25:33.794799 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 01 14:25:33 crc kubenswrapper[4820]: I0201 14:25:33.852288 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 01 14:25:33 crc kubenswrapper[4820]: I0201 14:25:33.965791 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.055671 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.066222 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.068656 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.076291 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.090675 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.163641 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.166078 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.256944 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.309327 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.434396 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.455556 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.524621 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.527735 
4820 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.569439 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.597620 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.598534 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.662807 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.715353 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.764943 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.841116 4820 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.843189 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" podStartSLOduration=47.843127138 podStartE2EDuration="47.843127138s" podCreationTimestamp="2026-02-01 14:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:25:13.138077599 +0000 UTC m=+254.658443903" watchObservedRunningTime="2026-02-01 14:25:34.843127138 +0000 UTC m=+276.363493462" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.845320 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=41.845308653000004 podStartE2EDuration="41.845308653s" podCreationTimestamp="2026-02-01 14:24:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:25:13.062615862 +0000 UTC m=+254.582982146" watchObservedRunningTime="2026-02-01 14:25:34.845308653 +0000 UTC m=+276.365674967" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.848792 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8mbbr","openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-f4mwh"] Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.848899 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-85766c7959-kd4sr"] Feb 01 14:25:34 crc kubenswrapper[4820]: E0201 14:25:34.849156 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b285186-eeb9-40d4-95cd-bc27968b969f" containerName="installer" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.849183 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b285186-eeb9-40d4-95cd-bc27968b969f" containerName="installer" 
Feb 01 14:25:34 crc kubenswrapper[4820]: E0201 14:25:34.849212 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" containerName="extract-content" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.849226 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" containerName="extract-content" Feb 01 14:25:34 crc kubenswrapper[4820]: E0201 14:25:34.849245 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" containerName="registry-server" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.849257 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" containerName="registry-server" Feb 01 14:25:34 crc kubenswrapper[4820]: E0201 14:25:34.849278 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e5cde7-6e0f-4213-94d4-746cfdb568e9" containerName="oauth-openshift" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.849290 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e5cde7-6e0f-4213-94d4-746cfdb568e9" containerName="oauth-openshift" Feb 01 14:25:34 crc kubenswrapper[4820]: E0201 14:25:34.849359 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" containerName="extract-utilities" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.849355 4820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2aea2d10-281a-4986-b42d-205f8c7c1272" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.849402 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2aea2d10-281a-4986-b42d-205f8c7c1272" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.849373 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" containerName="extract-utilities" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.849653 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" containerName="registry-server" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.849678 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e5cde7-6e0f-4213-94d4-746cfdb568e9" containerName="oauth-openshift" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.849689 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b285186-eeb9-40d4-95cd-bc27968b969f" containerName="installer" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.850035 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nswhf"] Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.850197 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.850365 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nswhf" podUID="26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b" containerName="registry-server" containerID="cri-o://546a897db45d29d1635c844043bf330c00d1f04211c6b7eafaae435d4beeb2bb" gracePeriod=2 Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.855370 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.855479 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.855383 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.855733 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.856125 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.856377 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.856373 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.856668 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.856982 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.858042 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.858484 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.858547 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.859029 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.859365 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.867246 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.868688 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 01 14:25:34 crc 
kubenswrapper[4820]: I0201 14:25:34.878409 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.885665 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 01 14:25:34 crc kubenswrapper[4820]: I0201 14:25:34.904842 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.904824714 podStartE2EDuration="21.904824714s" podCreationTimestamp="2026-02-01 14:25:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:25:34.900947329 +0000 UTC m=+276.421313613" watchObservedRunningTime="2026-02-01 14:25:34.904824714 +0000 UTC m=+276.425190998" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.000194 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.039760 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.039836 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-system-cliconfig\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.039977 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-system-serving-cert\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.040004 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/06c0a93f-5833-4dc1-8221-2ffadbb60935-audit-policies\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.040038 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06c0a93f-5833-4dc1-8221-2ffadbb60935-audit-dir\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.040061 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-user-template-login\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.040087 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-system-router-certs\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.040118 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.040149 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj2th\" (UniqueName: \"kubernetes.io/projected/06c0a93f-5833-4dc1-8221-2ffadbb60935-kube-api-access-hj2th\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.040179 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-system-session\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.040203 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-user-template-error\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.040226 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.040268 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-system-service-ca\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc 
kubenswrapper[4820]: I0201 14:25:35.040329 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.080121 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.124568 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.142530 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-system-cliconfig\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.142574 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-system-serving-cert\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.142593 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/06c0a93f-5833-4dc1-8221-2ffadbb60935-audit-policies\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.142623 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06c0a93f-5833-4dc1-8221-2ffadbb60935-audit-dir\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.142638 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-user-template-login\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.142659 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-system-router-certs\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.142855 4820 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.142923 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06c0a93f-5833-4dc1-8221-2ffadbb60935-audit-dir\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.142940 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj2th\" (UniqueName: \"kubernetes.io/projected/06c0a93f-5833-4dc1-8221-2ffadbb60935-kube-api-access-hj2th\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.142972 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-system-session\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.142992 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-user-template-error\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.143008 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.143025 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-system-service-ca\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.143042 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.143070 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.143622 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-system-cliconfig\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.143824 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.143833 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/06c0a93f-5833-4dc1-8221-2ffadbb60935-audit-policies\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.146586 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-system-service-ca\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.146903 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.150110 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-system-serving-cert\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.150194 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-user-template-login\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.150477 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-system-router-certs\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 
crc kubenswrapper[4820]: I0201 14:25:35.151551 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-user-template-error\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.155344 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-system-session\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.159271 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.161041 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.169422 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/06c0a93f-5833-4dc1-8221-2ffadbb60935-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.171083 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj2th\" (UniqueName: \"kubernetes.io/projected/06c0a93f-5833-4dc1-8221-2ffadbb60935-kube-api-access-hj2th\") pod \"oauth-openshift-85766c7959-kd4sr\" (UID: \"06c0a93f-5833-4dc1-8221-2ffadbb60935\") " pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.171887 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.218478 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.220623 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="382873c4-83aa-4693-9eb8-7b1f41b0f22b" path="/var/lib/kubelet/pods/382873c4-83aa-4693-9eb8-7b1f41b0f22b/volumes" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.221951 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0e5cde7-6e0f-4213-94d4-746cfdb568e9" path="/var/lib/kubelet/pods/d0e5cde7-6e0f-4213-94d4-746cfdb568e9/volumes" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.245470 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.251132 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nswhf" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.343962 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktn7n\" (UniqueName: \"kubernetes.io/projected/26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b-kube-api-access-ktn7n\") pod \"26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b\" (UID: \"26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b\") " Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.344015 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b-utilities\") pod \"26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b\" (UID: \"26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b\") " Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.344056 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b-catalog-content\") pod \"26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b\" (UID: \"26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b\") " Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.344939 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b-utilities" (OuterVolumeSpecName: "utilities") pod "26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b" (UID: "26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.348412 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b-kube-api-access-ktn7n" (OuterVolumeSpecName: "kube-api-access-ktn7n") pod "26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b" (UID: "26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b"). InnerVolumeSpecName "kube-api-access-ktn7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.370683 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b" (UID: "26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.372928 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.400654 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-85766c7959-kd4sr"] Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.411417 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.432282 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.445386 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.445424 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktn7n\" (UniqueName: \"kubernetes.io/projected/26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b-kube-api-access-ktn7n\") on node \"crc\" DevicePath \"\"" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.445437 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.447308 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.517052 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.529083 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.571284 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.575889 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.584058 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.587452 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" event={"ID":"06c0a93f-5833-4dc1-8221-2ffadbb60935","Type":"ContainerStarted","Data":"69da672b59377d11e0a1c4859f9d8079042ce8ad0dc83f12b8e2b98480b49d7e"} Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.589749 4820 generic.go:334] "Generic (PLEG): container finished" podID="26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b" containerID="546a897db45d29d1635c844043bf330c00d1f04211c6b7eafaae435d4beeb2bb" exitCode=0 Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.590666 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nswhf" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.593141 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nswhf" event={"ID":"26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b","Type":"ContainerDied","Data":"546a897db45d29d1635c844043bf330c00d1f04211c6b7eafaae435d4beeb2bb"} Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.593509 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nswhf" event={"ID":"26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b","Type":"ContainerDied","Data":"bf182e8b14900be19e41ae59c11d8b892501377bfb169a2bd205b53741a86427"} Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.593538 4820 scope.go:117] "RemoveContainer" containerID="546a897db45d29d1635c844043bf330c00d1f04211c6b7eafaae435d4beeb2bb" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.594814 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.612455 4820 scope.go:117] "RemoveContainer" containerID="92bc6c47291abb9be82ac203d2a25ca88f7cd8207cbdd4ee3780e3efcba072e0" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.620697 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.624024 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nswhf"] Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.626760 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nswhf"] Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.648563 4820 scope.go:117] "RemoveContainer" containerID="389fdc9ba957c4e12f780f69120960fce02ce01267122af3a5346d37c0089cb5" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.657277 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.660594 4820 scope.go:117] "RemoveContainer" containerID="546a897db45d29d1635c844043bf330c00d1f04211c6b7eafaae435d4beeb2bb" Feb 01 14:25:35 crc kubenswrapper[4820]: E0201 14:25:35.660917 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"546a897db45d29d1635c844043bf330c00d1f04211c6b7eafaae435d4beeb2bb\": container with ID starting with 546a897db45d29d1635c844043bf330c00d1f04211c6b7eafaae435d4beeb2bb not found: ID does not exist" containerID="546a897db45d29d1635c844043bf330c00d1f04211c6b7eafaae435d4beeb2bb" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.660951 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"546a897db45d29d1635c844043bf330c00d1f04211c6b7eafaae435d4beeb2bb"} err="failed to get container status \"546a897db45d29d1635c844043bf330c00d1f04211c6b7eafaae435d4beeb2bb\": rpc error: code = NotFound desc = could not find container \"546a897db45d29d1635c844043bf330c00d1f04211c6b7eafaae435d4beeb2bb\": container with ID starting with 546a897db45d29d1635c844043bf330c00d1f04211c6b7eafaae435d4beeb2bb not found: ID does not exist" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.660973 4820 scope.go:117] "RemoveContainer" containerID="92bc6c47291abb9be82ac203d2a25ca88f7cd8207cbdd4ee3780e3efcba072e0" Feb 
01 14:25:35 crc kubenswrapper[4820]: E0201 14:25:35.661201 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92bc6c47291abb9be82ac203d2a25ca88f7cd8207cbdd4ee3780e3efcba072e0\": container with ID starting with 92bc6c47291abb9be82ac203d2a25ca88f7cd8207cbdd4ee3780e3efcba072e0 not found: ID does not exist" containerID="92bc6c47291abb9be82ac203d2a25ca88f7cd8207cbdd4ee3780e3efcba072e0" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.661220 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92bc6c47291abb9be82ac203d2a25ca88f7cd8207cbdd4ee3780e3efcba072e0"} err="failed to get container status \"92bc6c47291abb9be82ac203d2a25ca88f7cd8207cbdd4ee3780e3efcba072e0\": rpc error: code = NotFound desc = could not find container \"92bc6c47291abb9be82ac203d2a25ca88f7cd8207cbdd4ee3780e3efcba072e0\": container with ID starting with 92bc6c47291abb9be82ac203d2a25ca88f7cd8207cbdd4ee3780e3efcba072e0 not found: ID does not exist" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.661234 4820 scope.go:117] "RemoveContainer" containerID="389fdc9ba957c4e12f780f69120960fce02ce01267122af3a5346d37c0089cb5" Feb 01 14:25:35 crc kubenswrapper[4820]: E0201 14:25:35.661401 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"389fdc9ba957c4e12f780f69120960fce02ce01267122af3a5346d37c0089cb5\": container with ID starting with 389fdc9ba957c4e12f780f69120960fce02ce01267122af3a5346d37c0089cb5 not found: ID does not exist" containerID="389fdc9ba957c4e12f780f69120960fce02ce01267122af3a5346d37c0089cb5" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.661422 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"389fdc9ba957c4e12f780f69120960fce02ce01267122af3a5346d37c0089cb5"} err="failed to get container status \"389fdc9ba957c4e12f780f69120960fce02ce01267122af3a5346d37c0089cb5\": rpc error: code = NotFound desc = could not find container \"389fdc9ba957c4e12f780f69120960fce02ce01267122af3a5346d37c0089cb5\": container with ID starting with 389fdc9ba957c4e12f780f69120960fce02ce01267122af3a5346d37c0089cb5 not found: ID does not exist" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.689264 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.748001 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.760774 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.771314 4820 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.771528 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://bc7b3ee409b1b4f640d8aadba1af2170a03ab14572a9b92cb055448869baf8e4" gracePeriod=5 Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.803940 4820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.827986 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 01 14:25:35 crc kubenswrapper[4820]: I0201 14:25:35.940343 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 01 14:25:36 crc kubenswrapper[4820]: I0201 14:25:36.012579 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 01 14:25:36 crc kubenswrapper[4820]: I0201 14:25:36.162602 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 01 14:25:36 crc kubenswrapper[4820]: I0201 14:25:36.227631 4820 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 01 14:25:36 crc kubenswrapper[4820]: I0201 14:25:36.318532 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 01 14:25:36 crc kubenswrapper[4820]: I0201 14:25:36.387771 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 01 14:25:36 crc kubenswrapper[4820]: I0201 14:25:36.455486 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 01 14:25:36 crc kubenswrapper[4820]: I0201 14:25:36.456445 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 01 14:25:36 crc kubenswrapper[4820]: I0201 14:25:36.456539 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 01 14:25:36 crc kubenswrapper[4820]: I0201 14:25:36.495763 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 01 14:25:36 crc kubenswrapper[4820]: I0201 14:25:36.538224 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 01 14:25:36 crc kubenswrapper[4820]: I0201 14:25:36.595937 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" event={"ID":"06c0a93f-5833-4dc1-8221-2ffadbb60935","Type":"ContainerStarted","Data":"76ffe38bd4964dcd8d1e3d24b9c0fd860998e04ef6d5df8a44756b789eb41a7d"} Feb 01 14:25:36 crc kubenswrapper[4820]: I0201 14:25:36.596276 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:36 crc kubenswrapper[4820]: I0201 14:25:36.601528 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" Feb 01 14:25:36 crc kubenswrapper[4820]: I0201 14:25:36.621448 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-85766c7959-kd4sr" podStartSLOduration=48.6214315 podStartE2EDuration="48.6214315s" podCreationTimestamp="2026-02-01 14:24:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:25:36.620005885 +0000 UTC m=+278.140372179" watchObservedRunningTime="2026-02-01 
14:25:36.6214315 +0000 UTC m=+278.141797784" Feb 01 14:25:36 crc kubenswrapper[4820]: I0201 14:25:36.630621 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 01 14:25:36 crc kubenswrapper[4820]: I0201 14:25:36.673686 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 01 14:25:36 crc kubenswrapper[4820]: I0201 14:25:36.693301 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 01 14:25:36 crc kubenswrapper[4820]: I0201 14:25:36.717137 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 01 14:25:36 crc kubenswrapper[4820]: I0201 14:25:36.737408 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 01 14:25:36 crc kubenswrapper[4820]: I0201 14:25:36.784093 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 01 14:25:36 crc kubenswrapper[4820]: I0201 14:25:36.810705 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 01 14:25:36 crc kubenswrapper[4820]: I0201 14:25:36.821656 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 01 14:25:36 crc kubenswrapper[4820]: I0201 14:25:36.929854 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 01 14:25:37 crc kubenswrapper[4820]: I0201 14:25:37.069205 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 01 14:25:37 crc kubenswrapper[4820]: I0201 14:25:37.078476 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 01 14:25:37 crc kubenswrapper[4820]: I0201 14:25:37.178781 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 01 14:25:37 crc kubenswrapper[4820]: I0201 14:25:37.204949 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b" path="/var/lib/kubelet/pods/26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b/volumes" Feb 01 14:25:37 crc kubenswrapper[4820]: I0201 14:25:37.264761 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 01 14:25:37 crc kubenswrapper[4820]: I0201 14:25:37.274266 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 01 14:25:37 crc kubenswrapper[4820]: I0201 14:25:37.287920 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 01 14:25:37 crc kubenswrapper[4820]: I0201 14:25:37.371683 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 01 14:25:37 crc kubenswrapper[4820]: I0201 14:25:37.398662 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 01 14:25:37 crc kubenswrapper[4820]: I0201 14:25:37.431032 4820 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 01 14:25:37 crc kubenswrapper[4820]: I0201 14:25:37.530966 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 01 14:25:37 crc kubenswrapper[4820]: I0201 14:25:37.564310 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 01 14:25:37 crc kubenswrapper[4820]: I0201 14:25:37.597514 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 01 14:25:37 crc kubenswrapper[4820]: I0201 14:25:37.662314 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 01 14:25:37 crc kubenswrapper[4820]: I0201 14:25:37.688744 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 01 14:25:37 crc kubenswrapper[4820]: I0201 14:25:37.688960 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 01 14:25:37 crc kubenswrapper[4820]: I0201 14:25:37.814633 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 01 14:25:37 crc kubenswrapper[4820]: I0201 14:25:37.820849 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 01 14:25:38 crc kubenswrapper[4820]: I0201 14:25:38.018703 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 01 14:25:38 crc kubenswrapper[4820]: I0201 14:25:38.113791 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 01 14:25:38 crc kubenswrapper[4820]: I0201 14:25:38.230623 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 01 14:25:38 crc kubenswrapper[4820]: I0201 14:25:38.276290 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 01 14:25:38 crc kubenswrapper[4820]: I0201 14:25:38.333446 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 01 14:25:38 crc kubenswrapper[4820]: I0201 14:25:38.339003 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 01 14:25:38 crc kubenswrapper[4820]: I0201 14:25:38.486071 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 01 14:25:38 crc kubenswrapper[4820]: I0201 14:25:38.637049 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 01 14:25:39 crc kubenswrapper[4820]: I0201 14:25:39.086616 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 01 14:25:39 crc kubenswrapper[4820]: I0201 14:25:39.182849 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 01 14:25:39 crc kubenswrapper[4820]: I0201 14:25:39.275513 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 01 14:25:39 crc kubenswrapper[4820]: I0201 14:25:39.333494 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 01 14:25:39 crc kubenswrapper[4820]: I0201 14:25:39.333495 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 01 14:25:39 crc kubenswrapper[4820]: I0201 14:25:39.357629 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 01 14:25:39 crc kubenswrapper[4820]: I0201 14:25:39.569080 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 01 14:25:39 crc kubenswrapper[4820]: I0201 14:25:39.692626 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 01 14:25:39 crc kubenswrapper[4820]: I0201 14:25:39.925290 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 01 14:25:40 crc kubenswrapper[4820]: I0201 14:25:40.265815 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 01 14:25:40 crc kubenswrapper[4820]: I0201 14:25:40.320768 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 01 14:25:40 crc kubenswrapper[4820]: I0201 14:25:40.438606 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 01 14:25:40 crc kubenswrapper[4820]: I0201 14:25:40.457377 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 01 14:25:40 crc kubenswrapper[4820]: I0201 14:25:40.555998 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 01 14:25:41 crc kubenswrapper[4820]: I0201 14:25:41.228059 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 01 14:25:41 crc kubenswrapper[4820]: I0201 14:25:41.333464 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 01 14:25:41 crc kubenswrapper[4820]: I0201 14:25:41.333540 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 01 14:25:41 crc kubenswrapper[4820]: I0201 14:25:41.380741 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 01 14:25:41 crc kubenswrapper[4820]: I0201 14:25:41.518136 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 01 14:25:41 crc kubenswrapper[4820]: I0201 14:25:41.518185 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 01 14:25:41 crc kubenswrapper[4820]: I0201 14:25:41.518244 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 01 14:25:41 crc kubenswrapper[4820]: I0201 14:25:41.518283 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 01 14:25:41 crc kubenswrapper[4820]: I0201 14:25:41.518321 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 01 14:25:41 crc kubenswrapper[4820]: I0201 14:25:41.518427 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 01 14:25:41 crc kubenswrapper[4820]: I0201 14:25:41.518510 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 01 14:25:41 crc kubenswrapper[4820]: I0201 14:25:41.518488 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 01 14:25:41 crc kubenswrapper[4820]: I0201 14:25:41.518604 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:25:41 crc kubenswrapper[4820]: I0201 14:25:41.518843 4820 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 01 14:25:41 crc kubenswrapper[4820]: I0201 14:25:41.518904 4820 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 01 14:25:41 crc kubenswrapper[4820]: I0201 14:25:41.518922 4820 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 01 14:25:41 crc kubenswrapper[4820]: I0201 14:25:41.518937 4820 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 01 14:25:41 crc kubenswrapper[4820]: I0201 14:25:41.525320 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:25:41 crc kubenswrapper[4820]: I0201 14:25:41.620158 4820 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 01 14:25:41 crc kubenswrapper[4820]: I0201 14:25:41.621544 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 01 14:25:41 crc kubenswrapper[4820]: I0201 14:25:41.621637 4820 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="bc7b3ee409b1b4f640d8aadba1af2170a03ab14572a9b92cb055448869baf8e4" exitCode=137 Feb 01 14:25:41 crc kubenswrapper[4820]: I0201 14:25:41.621690 4820 scope.go:117] "RemoveContainer" containerID="bc7b3ee409b1b4f640d8aadba1af2170a03ab14572a9b92cb055448869baf8e4" Feb 01 14:25:41 crc kubenswrapper[4820]: I0201 14:25:41.621738 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 14:25:41 crc kubenswrapper[4820]: I0201 14:25:41.637072 4820 scope.go:117] "RemoveContainer" containerID="bc7b3ee409b1b4f640d8aadba1af2170a03ab14572a9b92cb055448869baf8e4" Feb 01 14:25:41 crc kubenswrapper[4820]: E0201 14:25:41.637457 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc7b3ee409b1b4f640d8aadba1af2170a03ab14572a9b92cb055448869baf8e4\": container with ID starting with bc7b3ee409b1b4f640d8aadba1af2170a03ab14572a9b92cb055448869baf8e4 not found: ID does not exist" containerID="bc7b3ee409b1b4f640d8aadba1af2170a03ab14572a9b92cb055448869baf8e4" Feb 01 14:25:41 crc kubenswrapper[4820]: I0201 14:25:41.637493 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc7b3ee409b1b4f640d8aadba1af2170a03ab14572a9b92cb055448869baf8e4"} err="failed to get container status \"bc7b3ee409b1b4f640d8aadba1af2170a03ab14572a9b92cb055448869baf8e4\": rpc error: code = NotFound desc = could not find container \"bc7b3ee409b1b4f640d8aadba1af2170a03ab14572a9b92cb055448869baf8e4\": container with ID starting with bc7b3ee409b1b4f640d8aadba1af2170a03ab14572a9b92cb055448869baf8e4 not found: ID does not exist" Feb 01 14:25:42 crc kubenswrapper[4820]: I0201 14:25:42.005615 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 01 14:25:43 crc kubenswrapper[4820]: I0201 14:25:43.205278 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 01 14:25:43 crc kubenswrapper[4820]: I0201 14:25:43.205584 4820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 01 14:25:43 crc kubenswrapper[4820]: I0201 14:25:43.219932 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 01 14:25:43 crc kubenswrapper[4820]: I0201 14:25:43.219976 4820 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="56ab36d0-02dd-46ed-86d4-0a4aede25c4a" Feb 01 14:25:43 crc kubenswrapper[4820]: I0201 14:25:43.230394 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 01 14:25:43 crc kubenswrapper[4820]: I0201 14:25:43.230434 4820 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="56ab36d0-02dd-46ed-86d4-0a4aede25c4a" Feb 01 14:25:43 crc kubenswrapper[4820]: I0201 14:25:43.348167 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 01 14:25:43 crc kubenswrapper[4820]: I0201 14:25:43.352111 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 01 14:25:43 crc kubenswrapper[4820]: I0201 14:25:43.376180 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 01 14:25:47 crc kubenswrapper[4820]: I0201 14:25:47.615626 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-85895dc97d-n2g5s"] Feb 01 14:25:47 crc kubenswrapper[4820]: I0201 14:25:47.616223 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" podUID="7b6424e9-d2fd-4597-9a72-4cb2adf24b9f" containerName="controller-manager" containerID="cri-o://4bd693e2f820d7065695cd83227bdef269369f5cacffa130e044a0b894478e3c" gracePeriod=30 Feb 01 14:25:47 crc kubenswrapper[4820]: I0201 14:25:47.717981 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf"] Feb 01 14:25:47 crc kubenswrapper[4820]: I0201 14:25:47.718466 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" containerName="route-controller-manager" containerID="cri-o://ded517109d3fcf9bce38f44d4f49533f92b07326a615497ec038f2bdf04a197e" gracePeriod=30 Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.038924 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.083795 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-5f4c4d8458-lx9hf_c859d947-1117-42ca-b3d8-605bb491d4b3/route-controller-manager/0.log" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.083893 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.107169 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-config" (OuterVolumeSpecName: "config") pod "7b6424e9-d2fd-4597-9a72-4cb2adf24b9f" (UID: "7b6424e9-d2fd-4597-9a72-4cb2adf24b9f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.107238 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-config\") pod \"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f\" (UID: \"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f\") " Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.107305 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-serving-cert\") pod \"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f\" (UID: \"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f\") " Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.107339 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c859d947-1117-42ca-b3d8-605bb491d4b3-serving-cert\") pod \"c859d947-1117-42ca-b3d8-605bb491d4b3\" (UID: \"c859d947-1117-42ca-b3d8-605bb491d4b3\") " Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.108203 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-proxy-ca-bundles\") pod \"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f\" (UID: \"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f\") " Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.108231 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c859d947-1117-42ca-b3d8-605bb491d4b3-config\") pod \"c859d947-1117-42ca-b3d8-605bb491d4b3\" (UID: \"c859d947-1117-42ca-b3d8-605bb491d4b3\") " Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.108262 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-client-ca\") pod \"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f\" (UID: \"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f\") " Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.108284 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c859d947-1117-42ca-b3d8-605bb491d4b3-client-ca\") pod \"c859d947-1117-42ca-b3d8-605bb491d4b3\" (UID: \"c859d947-1117-42ca-b3d8-605bb491d4b3\") " Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.108321 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pntq\" (UniqueName: \"kubernetes.io/projected/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-kube-api-access-2pntq\") pod \"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f\" (UID: \"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f\") " Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.108358 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj46s\" (UniqueName: \"kubernetes.io/projected/c859d947-1117-42ca-b3d8-605bb491d4b3-kube-api-access-mj46s\") pod \"c859d947-1117-42ca-b3d8-605bb491d4b3\" (UID: \"c859d947-1117-42ca-b3d8-605bb491d4b3\") " Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.108693 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.109400 4820 
Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.109765 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7b6424e9-d2fd-4597-9a72-4cb2adf24b9f" (UID: "7b6424e9-d2fd-4597-9a72-4cb2adf24b9f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.110238 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c859d947-1117-42ca-b3d8-605bb491d4b3-config" (OuterVolumeSpecName: "config") pod "c859d947-1117-42ca-b3d8-605bb491d4b3" (UID: "c859d947-1117-42ca-b3d8-605bb491d4b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.110699 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c859d947-1117-42ca-b3d8-605bb491d4b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "c859d947-1117-42ca-b3d8-605bb491d4b3" (UID: "c859d947-1117-42ca-b3d8-605bb491d4b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.112430 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7b6424e9-d2fd-4597-9a72-4cb2adf24b9f" (UID: "7b6424e9-d2fd-4597-9a72-4cb2adf24b9f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.112569 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-kube-api-access-2pntq" (OuterVolumeSpecName: "kube-api-access-2pntq") pod "7b6424e9-d2fd-4597-9a72-4cb2adf24b9f" (UID: "7b6424e9-d2fd-4597-9a72-4cb2adf24b9f"). InnerVolumeSpecName "kube-api-access-2pntq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.113293 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c859d947-1117-42ca-b3d8-605bb491d4b3-kube-api-access-mj46s" (OuterVolumeSpecName: "kube-api-access-mj46s") pod "c859d947-1117-42ca-b3d8-605bb491d4b3" (UID: "c859d947-1117-42ca-b3d8-605bb491d4b3"). InnerVolumeSpecName "kube-api-access-mj46s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.114554 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c859d947-1117-42ca-b3d8-605bb491d4b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c859d947-1117-42ca-b3d8-605bb491d4b3" (UID: "c859d947-1117-42ca-b3d8-605bb491d4b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.209747 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj46s\" (UniqueName: \"kubernetes.io/projected/c859d947-1117-42ca-b3d8-605bb491d4b3-kube-api-access-mj46s\") on node \"crc\" DevicePath \"\""
Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.209809 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.209819 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c859d947-1117-42ca-b3d8-605bb491d4b3-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.209828 4820 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.209837 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c859d947-1117-42ca-b3d8-605bb491d4b3-config\") on node \"crc\" DevicePath \"\""
Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.209845 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-client-ca\") on node \"crc\" DevicePath \"\""
Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.209852 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c859d947-1117-42ca-b3d8-605bb491d4b3-client-ca\") on node \"crc\" DevicePath \"\""
Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.209860 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pntq\" (UniqueName: \"kubernetes.io/projected/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f-kube-api-access-2pntq\") on node \"crc\" DevicePath \"\""
Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.651410 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65fdc97cbf-mb497"]
Feb 01 14:25:48 crc kubenswrapper[4820]: E0201 14:25:48.651662 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b" containerName="extract-utilities"
Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.651677 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b" containerName="extract-utilities"
Feb 01 14:25:48 crc kubenswrapper[4820]: E0201 14:25:48.651689 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.651697 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 01 14:25:48 crc kubenswrapper[4820]: E0201 14:25:48.651708 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" containerName="route-controller-manager"
Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.651717 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" containerName="route-controller-manager"
containerName="route-controller-manager" Feb 01 14:25:48 crc kubenswrapper[4820]: E0201 14:25:48.651728 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b" containerName="extract-content" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.651735 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b" containerName="extract-content" Feb 01 14:25:48 crc kubenswrapper[4820]: E0201 14:25:48.651747 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b6424e9-d2fd-4597-9a72-4cb2adf24b9f" containerName="controller-manager" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.651754 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b6424e9-d2fd-4597-9a72-4cb2adf24b9f" containerName="controller-manager" Feb 01 14:25:48 crc kubenswrapper[4820]: E0201 14:25:48.651767 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b" containerName="registry-server" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.651774 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b" containerName="registry-server" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.651924 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" containerName="route-controller-manager" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.651942 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="26f893e7-b0f2-4d0e-abfe-ecb9fd67fe1b" containerName="registry-server" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.651955 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b6424e9-d2fd-4597-9a72-4cb2adf24b9f" containerName="controller-manager" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.651965 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.652429 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65fdc97cbf-mb497" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.657460 4820 generic.go:334] "Generic (PLEG): container finished" podID="7b6424e9-d2fd-4597-9a72-4cb2adf24b9f" containerID="4bd693e2f820d7065695cd83227bdef269369f5cacffa130e044a0b894478e3c" exitCode=0 Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.657521 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.657559 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" event={"ID":"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f","Type":"ContainerDied","Data":"4bd693e2f820d7065695cd83227bdef269369f5cacffa130e044a0b894478e3c"} Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.657597 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85895dc97d-n2g5s" event={"ID":"7b6424e9-d2fd-4597-9a72-4cb2adf24b9f","Type":"ContainerDied","Data":"6b43749d617096bbb8fa4cc5fb3d97633c969ec7cc1d696c7b7f1bd812f5f1ce"} Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.657614 4820 scope.go:117] "RemoveContainer" containerID="4bd693e2f820d7065695cd83227bdef269369f5cacffa130e044a0b894478e3c" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.659754 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-5f4c4d8458-lx9hf_c859d947-1117-42ca-b3d8-605bb491d4b3/route-controller-manager/0.log" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.659812 4820 generic.go:334] "Generic (PLEG): container finished" podID="c859d947-1117-42ca-b3d8-605bb491d4b3" containerID="ded517109d3fcf9bce38f44d4f49533f92b07326a615497ec038f2bdf04a197e" exitCode=0 Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.659842 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" event={"ID":"c859d947-1117-42ca-b3d8-605bb491d4b3","Type":"ContainerDied","Data":"ded517109d3fcf9bce38f44d4f49533f92b07326a615497ec038f2bdf04a197e"} Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.659874 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" event={"ID":"c859d947-1117-42ca-b3d8-605bb491d4b3","Type":"ContainerDied","Data":"c238bc550ea29c29e2704284b173142cd9531540b5115be323b658233bae379b"} Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.659887 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.664392 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65fdc97cbf-mb497"] Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.686102 4820 scope.go:117] "RemoveContainer" containerID="4bd693e2f820d7065695cd83227bdef269369f5cacffa130e044a0b894478e3c" Feb 01 14:25:48 crc kubenswrapper[4820]: E0201 14:25:48.687508 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bd693e2f820d7065695cd83227bdef269369f5cacffa130e044a0b894478e3c\": container with ID starting with 4bd693e2f820d7065695cd83227bdef269369f5cacffa130e044a0b894478e3c not found: ID does not exist" containerID="4bd693e2f820d7065695cd83227bdef269369f5cacffa130e044a0b894478e3c" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.687555 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bd693e2f820d7065695cd83227bdef269369f5cacffa130e044a0b894478e3c"} err="failed to get container status \"4bd693e2f820d7065695cd83227bdef269369f5cacffa130e044a0b894478e3c\": rpc error: code = NotFound desc = could not find container \"4bd693e2f820d7065695cd83227bdef269369f5cacffa130e044a0b894478e3c\": container with ID starting with 4bd693e2f820d7065695cd83227bdef269369f5cacffa130e044a0b894478e3c not found: ID does not exist" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.687587 4820 scope.go:117] "RemoveContainer" containerID="ded517109d3fcf9bce38f44d4f49533f92b07326a615497ec038f2bdf04a197e" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.707603 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-85895dc97d-n2g5s"] Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.708603 4820 scope.go:117] "RemoveContainer" containerID="80e560260aeedfedc4e56691de5c32b6cfb8176aa428ed4bb69f38c040e731af" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.713804 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-85895dc97d-n2g5s"] Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.715592 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd86926d-8828-44ce-9b83-e40c1e92d1f1-client-ca\") pod \"controller-manager-65fdc97cbf-mb497\" (UID: \"dd86926d-8828-44ce-9b83-e40c1e92d1f1\") " pod="openshift-controller-manager/controller-manager-65fdc97cbf-mb497" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.715678 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd86926d-8828-44ce-9b83-e40c1e92d1f1-serving-cert\") pod \"controller-manager-65fdc97cbf-mb497\" (UID: \"dd86926d-8828-44ce-9b83-e40c1e92d1f1\") " pod="openshift-controller-manager/controller-manager-65fdc97cbf-mb497" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.715707 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvt6f\" (UniqueName: \"kubernetes.io/projected/dd86926d-8828-44ce-9b83-e40c1e92d1f1-kube-api-access-kvt6f\") pod \"controller-manager-65fdc97cbf-mb497\" (UID: \"dd86926d-8828-44ce-9b83-e40c1e92d1f1\") " 
pod="openshift-controller-manager/controller-manager-65fdc97cbf-mb497" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.715838 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd86926d-8828-44ce-9b83-e40c1e92d1f1-config\") pod \"controller-manager-65fdc97cbf-mb497\" (UID: \"dd86926d-8828-44ce-9b83-e40c1e92d1f1\") " pod="openshift-controller-manager/controller-manager-65fdc97cbf-mb497" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.716034 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd86926d-8828-44ce-9b83-e40c1e92d1f1-proxy-ca-bundles\") pod \"controller-manager-65fdc97cbf-mb497\" (UID: \"dd86926d-8828-44ce-9b83-e40c1e92d1f1\") " pod="openshift-controller-manager/controller-manager-65fdc97cbf-mb497" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.718667 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf"] Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.723495 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f4c4d8458-lx9hf"] Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.735100 4820 scope.go:117] "RemoveContainer" containerID="ded517109d3fcf9bce38f44d4f49533f92b07326a615497ec038f2bdf04a197e" Feb 01 14:25:48 crc kubenswrapper[4820]: E0201 14:25:48.735507 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ded517109d3fcf9bce38f44d4f49533f92b07326a615497ec038f2bdf04a197e\": container with ID starting with ded517109d3fcf9bce38f44d4f49533f92b07326a615497ec038f2bdf04a197e not found: ID does not exist" containerID="ded517109d3fcf9bce38f44d4f49533f92b07326a615497ec038f2bdf04a197e" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.735567 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ded517109d3fcf9bce38f44d4f49533f92b07326a615497ec038f2bdf04a197e"} err="failed to get container status \"ded517109d3fcf9bce38f44d4f49533f92b07326a615497ec038f2bdf04a197e\": rpc error: code = NotFound desc = could not find container \"ded517109d3fcf9bce38f44d4f49533f92b07326a615497ec038f2bdf04a197e\": container with ID starting with ded517109d3fcf9bce38f44d4f49533f92b07326a615497ec038f2bdf04a197e not found: ID does not exist" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.735596 4820 scope.go:117] "RemoveContainer" containerID="80e560260aeedfedc4e56691de5c32b6cfb8176aa428ed4bb69f38c040e731af" Feb 01 14:25:48 crc kubenswrapper[4820]: E0201 14:25:48.735907 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80e560260aeedfedc4e56691de5c32b6cfb8176aa428ed4bb69f38c040e731af\": container with ID starting with 80e560260aeedfedc4e56691de5c32b6cfb8176aa428ed4bb69f38c040e731af not found: ID does not exist" containerID="80e560260aeedfedc4e56691de5c32b6cfb8176aa428ed4bb69f38c040e731af" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.735968 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80e560260aeedfedc4e56691de5c32b6cfb8176aa428ed4bb69f38c040e731af"} err="failed to get container status 
\"80e560260aeedfedc4e56691de5c32b6cfb8176aa428ed4bb69f38c040e731af\": rpc error: code = NotFound desc = could not find container \"80e560260aeedfedc4e56691de5c32b6cfb8176aa428ed4bb69f38c040e731af\": container with ID starting with 80e560260aeedfedc4e56691de5c32b6cfb8176aa428ed4bb69f38c040e731af not found: ID does not exist" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.817406 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd86926d-8828-44ce-9b83-e40c1e92d1f1-serving-cert\") pod \"controller-manager-65fdc97cbf-mb497\" (UID: \"dd86926d-8828-44ce-9b83-e40c1e92d1f1\") " pod="openshift-controller-manager/controller-manager-65fdc97cbf-mb497" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.817472 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvt6f\" (UniqueName: \"kubernetes.io/projected/dd86926d-8828-44ce-9b83-e40c1e92d1f1-kube-api-access-kvt6f\") pod \"controller-manager-65fdc97cbf-mb497\" (UID: \"dd86926d-8828-44ce-9b83-e40c1e92d1f1\") " pod="openshift-controller-manager/controller-manager-65fdc97cbf-mb497" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.817504 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd86926d-8828-44ce-9b83-e40c1e92d1f1-config\") pod \"controller-manager-65fdc97cbf-mb497\" (UID: \"dd86926d-8828-44ce-9b83-e40c1e92d1f1\") " pod="openshift-controller-manager/controller-manager-65fdc97cbf-mb497" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.817566 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd86926d-8828-44ce-9b83-e40c1e92d1f1-proxy-ca-bundles\") pod \"controller-manager-65fdc97cbf-mb497\" (UID: \"dd86926d-8828-44ce-9b83-e40c1e92d1f1\") " pod="openshift-controller-manager/controller-manager-65fdc97cbf-mb497" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.817603 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd86926d-8828-44ce-9b83-e40c1e92d1f1-client-ca\") pod \"controller-manager-65fdc97cbf-mb497\" (UID: \"dd86926d-8828-44ce-9b83-e40c1e92d1f1\") " pod="openshift-controller-manager/controller-manager-65fdc97cbf-mb497" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.818552 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd86926d-8828-44ce-9b83-e40c1e92d1f1-client-ca\") pod \"controller-manager-65fdc97cbf-mb497\" (UID: \"dd86926d-8828-44ce-9b83-e40c1e92d1f1\") " pod="openshift-controller-manager/controller-manager-65fdc97cbf-mb497" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.818744 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd86926d-8828-44ce-9b83-e40c1e92d1f1-config\") pod \"controller-manager-65fdc97cbf-mb497\" (UID: \"dd86926d-8828-44ce-9b83-e40c1e92d1f1\") " pod="openshift-controller-manager/controller-manager-65fdc97cbf-mb497" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.818813 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd86926d-8828-44ce-9b83-e40c1e92d1f1-proxy-ca-bundles\") pod \"controller-manager-65fdc97cbf-mb497\" (UID: 
\"dd86926d-8828-44ce-9b83-e40c1e92d1f1\") " pod="openshift-controller-manager/controller-manager-65fdc97cbf-mb497" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.821844 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd86926d-8828-44ce-9b83-e40c1e92d1f1-serving-cert\") pod \"controller-manager-65fdc97cbf-mb497\" (UID: \"dd86926d-8828-44ce-9b83-e40c1e92d1f1\") " pod="openshift-controller-manager/controller-manager-65fdc97cbf-mb497" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.838132 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvt6f\" (UniqueName: \"kubernetes.io/projected/dd86926d-8828-44ce-9b83-e40c1e92d1f1-kube-api-access-kvt6f\") pod \"controller-manager-65fdc97cbf-mb497\" (UID: \"dd86926d-8828-44ce-9b83-e40c1e92d1f1\") " pod="openshift-controller-manager/controller-manager-65fdc97cbf-mb497" Feb 01 14:25:48 crc kubenswrapper[4820]: I0201 14:25:48.988529 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65fdc97cbf-mb497" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.205285 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b6424e9-d2fd-4597-9a72-4cb2adf24b9f" path="/var/lib/kubelet/pods/7b6424e9-d2fd-4597-9a72-4cb2adf24b9f/volumes" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.205775 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" path="/var/lib/kubelet/pods/c859d947-1117-42ca-b3d8-605bb491d4b3/volumes" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.361982 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65fdc97cbf-mb497"] Feb 01 14:25:49 crc kubenswrapper[4820]: W0201 14:25:49.369079 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd86926d_8828_44ce_9b83_e40c1e92d1f1.slice/crio-361c2da9c429135c31a4dc5b488d63c60ab621c3458bcfcc71523ba73977c645 WatchSource:0}: Error finding container 361c2da9c429135c31a4dc5b488d63c60ab621c3458bcfcc71523ba73977c645: Status 404 returned error can't find the container with id 361c2da9c429135c31a4dc5b488d63c60ab621c3458bcfcc71523ba73977c645 Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.653594 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4"] Feb 01 14:25:49 crc kubenswrapper[4820]: E0201 14:25:49.654456 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" containerName="route-controller-manager" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.654477 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" containerName="route-controller-manager" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.654610 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c859d947-1117-42ca-b3d8-605bb491d4b3" containerName="route-controller-manager" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.656187 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.659907 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4"] Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.668554 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.668701 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.668858 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.668941 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.668986 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.669155 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.674296 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65fdc97cbf-mb497" event={"ID":"dd86926d-8828-44ce-9b83-e40c1e92d1f1","Type":"ContainerStarted","Data":"7b302df9c87f9dd583e4d0d580f89476a039d3a7cf4bfd8621a2dc79a00d4508"} Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.674356 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65fdc97cbf-mb497" event={"ID":"dd86926d-8828-44ce-9b83-e40c1e92d1f1","Type":"ContainerStarted","Data":"361c2da9c429135c31a4dc5b488d63c60ab621c3458bcfcc71523ba73977c645"} Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.674814 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-65fdc97cbf-mb497" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.690171 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65fdc97cbf-mb497" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.711554 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65fdc97cbf-mb497" podStartSLOduration=2.7115241020000003 podStartE2EDuration="2.711524102s" podCreationTimestamp="2026-02-01 14:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:25:49.707618827 +0000 UTC m=+291.227985121" watchObservedRunningTime="2026-02-01 14:25:49.711524102 +0000 UTC m=+291.231890386" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.730312 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/550a77ce-b347-426f-9a06-7a74d604698a-config\") pod \"route-controller-manager-54fd65d85f-qgpr4\" (UID: \"550a77ce-b347-426f-9a06-7a74d604698a\") " 
pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.730416 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/550a77ce-b347-426f-9a06-7a74d604698a-client-ca\") pod \"route-controller-manager-54fd65d85f-qgpr4\" (UID: \"550a77ce-b347-426f-9a06-7a74d604698a\") " pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.730480 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k9bg\" (UniqueName: \"kubernetes.io/projected/550a77ce-b347-426f-9a06-7a74d604698a-kube-api-access-8k9bg\") pod \"route-controller-manager-54fd65d85f-qgpr4\" (UID: \"550a77ce-b347-426f-9a06-7a74d604698a\") " pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.730506 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/550a77ce-b347-426f-9a06-7a74d604698a-serving-cert\") pod \"route-controller-manager-54fd65d85f-qgpr4\" (UID: \"550a77ce-b347-426f-9a06-7a74d604698a\") " pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.831719 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k9bg\" (UniqueName: \"kubernetes.io/projected/550a77ce-b347-426f-9a06-7a74d604698a-kube-api-access-8k9bg\") pod \"route-controller-manager-54fd65d85f-qgpr4\" (UID: \"550a77ce-b347-426f-9a06-7a74d604698a\") " pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.831794 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/550a77ce-b347-426f-9a06-7a74d604698a-serving-cert\") pod \"route-controller-manager-54fd65d85f-qgpr4\" (UID: \"550a77ce-b347-426f-9a06-7a74d604698a\") " pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.831849 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/550a77ce-b347-426f-9a06-7a74d604698a-config\") pod \"route-controller-manager-54fd65d85f-qgpr4\" (UID: \"550a77ce-b347-426f-9a06-7a74d604698a\") " pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.831961 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/550a77ce-b347-426f-9a06-7a74d604698a-client-ca\") pod \"route-controller-manager-54fd65d85f-qgpr4\" (UID: \"550a77ce-b347-426f-9a06-7a74d604698a\") " pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.832859 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/550a77ce-b347-426f-9a06-7a74d604698a-client-ca\") pod \"route-controller-manager-54fd65d85f-qgpr4\" (UID: \"550a77ce-b347-426f-9a06-7a74d604698a\") " 
pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.833161 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/550a77ce-b347-426f-9a06-7a74d604698a-config\") pod \"route-controller-manager-54fd65d85f-qgpr4\" (UID: \"550a77ce-b347-426f-9a06-7a74d604698a\") " pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.841256 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/550a77ce-b347-426f-9a06-7a74d604698a-serving-cert\") pod \"route-controller-manager-54fd65d85f-qgpr4\" (UID: \"550a77ce-b347-426f-9a06-7a74d604698a\") " pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.848595 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k9bg\" (UniqueName: \"kubernetes.io/projected/550a77ce-b347-426f-9a06-7a74d604698a-kube-api-access-8k9bg\") pod \"route-controller-manager-54fd65d85f-qgpr4\" (UID: \"550a77ce-b347-426f-9a06-7a74d604698a\") " pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4" Feb 01 14:25:49 crc kubenswrapper[4820]: I0201 14:25:49.975804 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4" Feb 01 14:25:50 crc kubenswrapper[4820]: I0201 14:25:50.174775 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4"] Feb 01 14:25:50 crc kubenswrapper[4820]: I0201 14:25:50.683230 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4" event={"ID":"550a77ce-b347-426f-9a06-7a74d604698a","Type":"ContainerStarted","Data":"a7b8be030a8b2a3f7187e15c25904cb0e4b169c4ab18b60ced9bee3a09218327"} Feb 01 14:25:50 crc kubenswrapper[4820]: I0201 14:25:50.683292 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4" event={"ID":"550a77ce-b347-426f-9a06-7a74d604698a","Type":"ContainerStarted","Data":"17c1f2f5bf59d1f1a369e84639092f47555ac275e284e2f28f69936b49d04f6d"} Feb 01 14:25:50 crc kubenswrapper[4820]: I0201 14:25:50.703383 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4" podStartSLOduration=3.703356774 podStartE2EDuration="3.703356774s" podCreationTimestamp="2026-02-01 14:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:25:50.70037694 +0000 UTC m=+292.220743234" watchObservedRunningTime="2026-02-01 14:25:50.703356774 +0000 UTC m=+292.223723058" Feb 01 14:25:51 crc kubenswrapper[4820]: I0201 14:25:51.690585 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4" Feb 01 14:25:51 crc kubenswrapper[4820]: I0201 14:25:51.695785 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4" Feb 01 
14:25:57 crc kubenswrapper[4820]: I0201 14:25:57.721717 4820 generic.go:334] "Generic (PLEG): container finished" podID="973ec7e3-13c9-47b8-b10e-5bff2619f164" containerID="b346df61d5c736e0515c1075d0d1bbb5fcae41532d5ff6c94b78ff3f5445cd50" exitCode=0 Feb 01 14:25:57 crc kubenswrapper[4820]: I0201 14:25:57.721943 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" event={"ID":"973ec7e3-13c9-47b8-b10e-5bff2619f164","Type":"ContainerDied","Data":"b346df61d5c736e0515c1075d0d1bbb5fcae41532d5ff6c94b78ff3f5445cd50"} Feb 01 14:25:57 crc kubenswrapper[4820]: I0201 14:25:57.723133 4820 scope.go:117] "RemoveContainer" containerID="b346df61d5c736e0515c1075d0d1bbb5fcae41532d5ff6c94b78ff3f5445cd50" Feb 01 14:25:58 crc kubenswrapper[4820]: I0201 14:25:58.730362 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" event={"ID":"973ec7e3-13c9-47b8-b10e-5bff2619f164","Type":"ContainerStarted","Data":"024ece7bc787504b657797c37d098a64c542f7a04a66bc7044adfb8e966d94d7"} Feb 01 14:25:58 crc kubenswrapper[4820]: I0201 14:25:58.731082 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" Feb 01 14:25:58 crc kubenswrapper[4820]: I0201 14:25:58.733472 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" Feb 01 14:25:58 crc kubenswrapper[4820]: I0201 14:25:58.977799 4820 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 01 14:26:07 crc kubenswrapper[4820]: I0201 14:26:07.644743 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4"] Feb 01 14:26:07 crc kubenswrapper[4820]: I0201 14:26:07.645714 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4" podUID="550a77ce-b347-426f-9a06-7a74d604698a" containerName="route-controller-manager" containerID="cri-o://a7b8be030a8b2a3f7187e15c25904cb0e4b169c4ab18b60ced9bee3a09218327" gracePeriod=30 Feb 01 14:26:07 crc kubenswrapper[4820]: I0201 14:26:07.779249 4820 generic.go:334] "Generic (PLEG): container finished" podID="550a77ce-b347-426f-9a06-7a74d604698a" containerID="a7b8be030a8b2a3f7187e15c25904cb0e4b169c4ab18b60ced9bee3a09218327" exitCode=0 Feb 01 14:26:07 crc kubenswrapper[4820]: I0201 14:26:07.779291 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4" event={"ID":"550a77ce-b347-426f-9a06-7a74d604698a","Type":"ContainerDied","Data":"a7b8be030a8b2a3f7187e15c25904cb0e4b169c4ab18b60ced9bee3a09218327"} Feb 01 14:26:08 crc kubenswrapper[4820]: I0201 14:26:08.122644 4820 util.go:48] "No ready sandbox for pod can be found. 
Feb 01 14:26:08 crc kubenswrapper[4820]: I0201 14:26:08.122644 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4"
Feb 01 14:26:08 crc kubenswrapper[4820]: I0201 14:26:08.177000 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/550a77ce-b347-426f-9a06-7a74d604698a-config\") pod \"550a77ce-b347-426f-9a06-7a74d604698a\" (UID: \"550a77ce-b347-426f-9a06-7a74d604698a\") "
Feb 01 14:26:08 crc kubenswrapper[4820]: I0201 14:26:08.177068 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/550a77ce-b347-426f-9a06-7a74d604698a-serving-cert\") pod \"550a77ce-b347-426f-9a06-7a74d604698a\" (UID: \"550a77ce-b347-426f-9a06-7a74d604698a\") "
Feb 01 14:26:08 crc kubenswrapper[4820]: I0201 14:26:08.177131 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/550a77ce-b347-426f-9a06-7a74d604698a-client-ca\") pod \"550a77ce-b347-426f-9a06-7a74d604698a\" (UID: \"550a77ce-b347-426f-9a06-7a74d604698a\") "
Feb 01 14:26:08 crc kubenswrapper[4820]: I0201 14:26:08.177191 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k9bg\" (UniqueName: \"kubernetes.io/projected/550a77ce-b347-426f-9a06-7a74d604698a-kube-api-access-8k9bg\") pod \"550a77ce-b347-426f-9a06-7a74d604698a\" (UID: \"550a77ce-b347-426f-9a06-7a74d604698a\") "
Feb 01 14:26:08 crc kubenswrapper[4820]: I0201 14:26:08.177854 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/550a77ce-b347-426f-9a06-7a74d604698a-client-ca" (OuterVolumeSpecName: "client-ca") pod "550a77ce-b347-426f-9a06-7a74d604698a" (UID: "550a77ce-b347-426f-9a06-7a74d604698a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:26:08 crc kubenswrapper[4820]: I0201 14:26:08.177967 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/550a77ce-b347-426f-9a06-7a74d604698a-config" (OuterVolumeSpecName: "config") pod "550a77ce-b347-426f-9a06-7a74d604698a" (UID: "550a77ce-b347-426f-9a06-7a74d604698a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:26:08 crc kubenswrapper[4820]: I0201 14:26:08.184461 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/550a77ce-b347-426f-9a06-7a74d604698a-kube-api-access-8k9bg" (OuterVolumeSpecName: "kube-api-access-8k9bg") pod "550a77ce-b347-426f-9a06-7a74d604698a" (UID: "550a77ce-b347-426f-9a06-7a74d604698a"). InnerVolumeSpecName "kube-api-access-8k9bg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 14:26:08 crc kubenswrapper[4820]: I0201 14:26:08.184983 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/550a77ce-b347-426f-9a06-7a74d604698a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "550a77ce-b347-426f-9a06-7a74d604698a" (UID: "550a77ce-b347-426f-9a06-7a74d604698a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:26:08 crc kubenswrapper[4820]: I0201 14:26:08.277998 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/550a77ce-b347-426f-9a06-7a74d604698a-config\") on node \"crc\" DevicePath \"\""
Feb 01 14:26:08 crc kubenswrapper[4820]: I0201 14:26:08.278022 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/550a77ce-b347-426f-9a06-7a74d604698a-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 01 14:26:08 crc kubenswrapper[4820]: I0201 14:26:08.278031 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/550a77ce-b347-426f-9a06-7a74d604698a-client-ca\") on node \"crc\" DevicePath \"\""
Feb 01 14:26:08 crc kubenswrapper[4820]: I0201 14:26:08.278042 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8k9bg\" (UniqueName: \"kubernetes.io/projected/550a77ce-b347-426f-9a06-7a74d604698a-kube-api-access-8k9bg\") on node \"crc\" DevicePath \"\""
Feb 01 14:26:08 crc kubenswrapper[4820]: I0201 14:26:08.786655 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4" event={"ID":"550a77ce-b347-426f-9a06-7a74d604698a","Type":"ContainerDied","Data":"17c1f2f5bf59d1f1a369e84639092f47555ac275e284e2f28f69936b49d04f6d"}
Feb 01 14:26:08 crc kubenswrapper[4820]: I0201 14:26:08.786702 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4"
Feb 01 14:26:08 crc kubenswrapper[4820]: I0201 14:26:08.786713 4820 scope.go:117] "RemoveContainer" containerID="a7b8be030a8b2a3f7187e15c25904cb0e4b169c4ab18b60ced9bee3a09218327"
Feb 01 14:26:08 crc kubenswrapper[4820]: I0201 14:26:08.814293 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4"]
Feb 01 14:26:08 crc kubenswrapper[4820]: I0201 14:26:08.820246 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54fd65d85f-qgpr4"]
Feb 01 14:26:09 crc kubenswrapper[4820]: I0201 14:26:09.208854 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="550a77ce-b347-426f-9a06-7a74d604698a" path="/var/lib/kubelet/pods/550a77ce-b347-426f-9a06-7a74d604698a/volumes"
Feb 01 14:26:09 crc kubenswrapper[4820]: I0201 14:26:09.664327 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6"]
Feb 01 14:26:09 crc kubenswrapper[4820]: E0201 14:26:09.664741 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="550a77ce-b347-426f-9a06-7a74d604698a" containerName="route-controller-manager"
Feb 01 14:26:09 crc kubenswrapper[4820]: I0201 14:26:09.664767 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="550a77ce-b347-426f-9a06-7a74d604698a" containerName="route-controller-manager"
Feb 01 14:26:09 crc kubenswrapper[4820]: I0201 14:26:09.664958 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="550a77ce-b347-426f-9a06-7a74d604698a" containerName="route-controller-manager"
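Each teardown above runs the same three-step pattern per volume: "UnmountVolume started" (reconciler_common.go:159), "UnmountVolume.TearDown succeeded" (operation_generator.go:803), then "Volume detached" (reconciler_common.go:293). A small stdlib consistency check, keyed on the UniqueName field, can confirm that every started unmount reached the detached state; note the pattern matches the escaped \" quoting exactly as it appears in these lines (a sketch for log analysis, not a kubelet tool):

    // volcheck.go - pairs "UnmountVolume started" entries with their terminal
    // "Volume detached" entries by UniqueName, and reports leftovers.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    // Matches the escaped form: (UniqueName: \"kubernetes.io/...\")
    var uniqueName = regexp.MustCompile(`UniqueName: \\"([^\\]+)\\"`)

    func main() {
        pending := map[string]bool{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
        for sc.Scan() {
            line := sc.Text()
            m := uniqueName.FindStringSubmatch(line)
            if m == nil {
                continue
            }
            switch {
            case strings.Contains(line, "UnmountVolume started"):
                pending[m[1]] = true // teardown began for this volume
            case strings.Contains(line, "Volume detached"):
                delete(pending, m[1]) // reconciler confirmed removal
            }
        }
        for v := range pending {
            fmt.Println("unmount started but never detached:", v)
        }
    }

Run against this excerpt it prints nothing: all four volumes of the qgpr4 pod (config, serving-cert, client-ca, kube-api-access-8k9bg) reach "Volume detached".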
Feb 01 14:26:09 crc kubenswrapper[4820]: I0201 14:26:09.665412 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6"
Feb 01 14:26:09 crc kubenswrapper[4820]: I0201 14:26:09.667744 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 01 14:26:09 crc kubenswrapper[4820]: I0201 14:26:09.667805 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 01 14:26:09 crc kubenswrapper[4820]: I0201 14:26:09.667823 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 01 14:26:09 crc kubenswrapper[4820]: I0201 14:26:09.667825 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 01 14:26:09 crc kubenswrapper[4820]: I0201 14:26:09.667932 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 01 14:26:09 crc kubenswrapper[4820]: I0201 14:26:09.669820 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 01 14:26:09 crc kubenswrapper[4820]: I0201 14:26:09.682531 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6"]
Feb 01 14:26:09 crc kubenswrapper[4820]: I0201 14:26:09.696559 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ce19335-f447-4aec-8c8b-bb1896a6ae3f-client-ca\") pod \"route-controller-manager-785564f44b-s64b6\" (UID: \"4ce19335-f447-4aec-8c8b-bb1896a6ae3f\") " pod="openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6"
Feb 01 14:26:09 crc kubenswrapper[4820]: I0201 14:26:09.696633 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7492\" (UniqueName: \"kubernetes.io/projected/4ce19335-f447-4aec-8c8b-bb1896a6ae3f-kube-api-access-f7492\") pod \"route-controller-manager-785564f44b-s64b6\" (UID: \"4ce19335-f447-4aec-8c8b-bb1896a6ae3f\") " pod="openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6"
Feb 01 14:26:09 crc kubenswrapper[4820]: I0201 14:26:09.696849 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce19335-f447-4aec-8c8b-bb1896a6ae3f-config\") pod \"route-controller-manager-785564f44b-s64b6\" (UID: \"4ce19335-f447-4aec-8c8b-bb1896a6ae3f\") " pod="openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6"
Feb 01 14:26:09 crc kubenswrapper[4820]: I0201 14:26:09.696994 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ce19335-f447-4aec-8c8b-bb1896a6ae3f-serving-cert\") pod \"route-controller-manager-785564f44b-s64b6\" (UID: \"4ce19335-f447-4aec-8c8b-bb1896a6ae3f\") " pod="openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6"
Feb 01 14:26:09 crc kubenswrapper[4820]: I0201 14:26:09.800097 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7492\" (UniqueName: \"kubernetes.io/projected/4ce19335-f447-4aec-8c8b-bb1896a6ae3f-kube-api-access-f7492\") pod \"route-controller-manager-785564f44b-s64b6\" (UID: \"4ce19335-f447-4aec-8c8b-bb1896a6ae3f\") " pod="openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6"
Feb 01 14:26:09 crc kubenswrapper[4820]: I0201 14:26:09.800231 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce19335-f447-4aec-8c8b-bb1896a6ae3f-config\") pod \"route-controller-manager-785564f44b-s64b6\" (UID: \"4ce19335-f447-4aec-8c8b-bb1896a6ae3f\") " pod="openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6"
Feb 01 14:26:09 crc kubenswrapper[4820]: I0201 14:26:09.800320 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ce19335-f447-4aec-8c8b-bb1896a6ae3f-serving-cert\") pod \"route-controller-manager-785564f44b-s64b6\" (UID: \"4ce19335-f447-4aec-8c8b-bb1896a6ae3f\") " pod="openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6"
Feb 01 14:26:09 crc kubenswrapper[4820]: I0201 14:26:09.800360 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ce19335-f447-4aec-8c8b-bb1896a6ae3f-client-ca\") pod \"route-controller-manager-785564f44b-s64b6\" (UID: \"4ce19335-f447-4aec-8c8b-bb1896a6ae3f\") " pod="openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6"
Feb 01 14:26:09 crc kubenswrapper[4820]: I0201 14:26:09.801512 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ce19335-f447-4aec-8c8b-bb1896a6ae3f-client-ca\") pod \"route-controller-manager-785564f44b-s64b6\" (UID: \"4ce19335-f447-4aec-8c8b-bb1896a6ae3f\") " pod="openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6"
Feb 01 14:26:09 crc kubenswrapper[4820]: I0201 14:26:09.802852 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce19335-f447-4aec-8c8b-bb1896a6ae3f-config\") pod \"route-controller-manager-785564f44b-s64b6\" (UID: \"4ce19335-f447-4aec-8c8b-bb1896a6ae3f\") " pod="openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6"
Feb 01 14:26:09 crc kubenswrapper[4820]: I0201 14:26:09.811291 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ce19335-f447-4aec-8c8b-bb1896a6ae3f-serving-cert\") pod \"route-controller-manager-785564f44b-s64b6\" (UID: \"4ce19335-f447-4aec-8c8b-bb1896a6ae3f\") " pod="openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6"
Feb 01 14:26:09 crc kubenswrapper[4820]: I0201 14:26:09.826813 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7492\" (UniqueName: \"kubernetes.io/projected/4ce19335-f447-4aec-8c8b-bb1896a6ae3f-kube-api-access-f7492\") pod \"route-controller-manager-785564f44b-s64b6\" (UID: \"4ce19335-f447-4aec-8c8b-bb1896a6ae3f\") " pod="openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6"
Feb 01 14:26:09 crc kubenswrapper[4820]: I0201 14:26:09.978156 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6"
Feb 01 14:26:10 crc kubenswrapper[4820]: I0201 14:26:10.408581 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6"]
Feb 01 14:26:10 crc kubenswrapper[4820]: W0201 14:26:10.409446 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ce19335_f447_4aec_8c8b_bb1896a6ae3f.slice/crio-e7286323a58671fa0fdd100ccf1841a76e833c666a6f385a22e4530a29ac79fc WatchSource:0}: Error finding container e7286323a58671fa0fdd100ccf1841a76e833c666a6f385a22e4530a29ac79fc: Status 404 returned error can't find the container with id e7286323a58671fa0fdd100ccf1841a76e833c666a6f385a22e4530a29ac79fc
Feb 01 14:26:10 crc kubenswrapper[4820]: I0201 14:26:10.798942 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6" event={"ID":"4ce19335-f447-4aec-8c8b-bb1896a6ae3f","Type":"ContainerStarted","Data":"957a0e7c2aec2d11f7c0f2203199175e05301e613a811e39da646835cab52c54"}
Feb 01 14:26:10 crc kubenswrapper[4820]: I0201 14:26:10.799261 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6" event={"ID":"4ce19335-f447-4aec-8c8b-bb1896a6ae3f","Type":"ContainerStarted","Data":"e7286323a58671fa0fdd100ccf1841a76e833c666a6f385a22e4530a29ac79fc"}
Feb 01 14:26:10 crc kubenswrapper[4820]: I0201 14:26:10.799282 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6"
Feb 01 14:26:11 crc kubenswrapper[4820]: I0201 14:26:11.420629 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6"
Feb 01 14:26:11 crc kubenswrapper[4820]: I0201 14:26:11.437856 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6" podStartSLOduration=4.437835351 podStartE2EDuration="4.437835351s" podCreationTimestamp="2026-02-01 14:26:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:26:10.816471113 +0000 UTC m=+312.336837467" watchObservedRunningTime="2026-02-01 14:26:11.437835351 +0000 UTC m=+312.958201635"
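The pod_startup_latency_tracker entry above is plain timestamp arithmetic: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp, and the zeroed firstStartedPulling/lastFinishedPulling values indicate no image pull was needed. The 4.437835351s figure can be reproduced directly:

    // slodelta.go - reproduces the podStartSLOduration arithmetic visible above:
    // watchObservedRunningTime minus podCreationTimestamp.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching Go's default time.Time.String() output used in the log.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

        created, _ := time.Parse(layout, "2026-02-01 14:26:07 +0000 UTC")
        observed, _ := time.Parse(layout, "2026-02-01 14:26:11.437835351 +0000 UTC")

        // Prints 4.437835351s, the podStartSLOduration reported for
        // route-controller-manager-785564f44b-s64b6.
        fmt.Println(observed.Sub(created))
    }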
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.094718 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k7g28"]
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.099836 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.114099 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k7g28"]
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.259647 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt85x\" (UniqueName: \"kubernetes.io/projected/d0f7c20b-4b41-40a2-9058-d069793d69d8-kube-api-access-xt85x\") pod \"image-registry-66df7c8f76-k7g28\" (UID: \"d0f7c20b-4b41-40a2-9058-d069793d69d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.259693 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d0f7c20b-4b41-40a2-9058-d069793d69d8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k7g28\" (UID: \"d0f7c20b-4b41-40a2-9058-d069793d69d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.259720 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-k7g28\" (UID: \"d0f7c20b-4b41-40a2-9058-d069793d69d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.259961 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d0f7c20b-4b41-40a2-9058-d069793d69d8-registry-certificates\") pod \"image-registry-66df7c8f76-k7g28\" (UID: \"d0f7c20b-4b41-40a2-9058-d069793d69d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.260000 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d0f7c20b-4b41-40a2-9058-d069793d69d8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k7g28\" (UID: \"d0f7c20b-4b41-40a2-9058-d069793d69d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.260026 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d0f7c20b-4b41-40a2-9058-d069793d69d8-registry-tls\") pod \"image-registry-66df7c8f76-k7g28\" (UID: \"d0f7c20b-4b41-40a2-9058-d069793d69d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.260137 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d0f7c20b-4b41-40a2-9058-d069793d69d8-bound-sa-token\") pod \"image-registry-66df7c8f76-k7g28\" (UID: \"d0f7c20b-4b41-40a2-9058-d069793d69d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.260191 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0f7c20b-4b41-40a2-9058-d069793d69d8-trusted-ca\") pod \"image-registry-66df7c8f76-k7g28\" (UID: \"d0f7c20b-4b41-40a2-9058-d069793d69d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.285247 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-k7g28\" (UID: \"d0f7c20b-4b41-40a2-9058-d069793d69d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.361004 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d0f7c20b-4b41-40a2-9058-d069793d69d8-registry-tls\") pod \"image-registry-66df7c8f76-k7g28\" (UID: \"d0f7c20b-4b41-40a2-9058-d069793d69d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.361071 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d0f7c20b-4b41-40a2-9058-d069793d69d8-bound-sa-token\") pod \"image-registry-66df7c8f76-k7g28\" (UID: \"d0f7c20b-4b41-40a2-9058-d069793d69d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.361099 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0f7c20b-4b41-40a2-9058-d069793d69d8-trusted-ca\") pod \"image-registry-66df7c8f76-k7g28\" (UID: \"d0f7c20b-4b41-40a2-9058-d069793d69d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.361139 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d0f7c20b-4b41-40a2-9058-d069793d69d8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k7g28\" (UID: \"d0f7c20b-4b41-40a2-9058-d069793d69d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.361161 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt85x\" (UniqueName: \"kubernetes.io/projected/d0f7c20b-4b41-40a2-9058-d069793d69d8-kube-api-access-xt85x\") pod \"image-registry-66df7c8f76-k7g28\" (UID: \"d0f7c20b-4b41-40a2-9058-d069793d69d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.361192 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d0f7c20b-4b41-40a2-9058-d069793d69d8-registry-certificates\") pod \"image-registry-66df7c8f76-k7g28\" (UID: \"d0f7c20b-4b41-40a2-9058-d069793d69d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.361226 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d0f7c20b-4b41-40a2-9058-d069793d69d8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k7g28\" (UID: \"d0f7c20b-4b41-40a2-9058-d069793d69d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.361664 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d0f7c20b-4b41-40a2-9058-d069793d69d8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k7g28\" (UID: \"d0f7c20b-4b41-40a2-9058-d069793d69d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.362936 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d0f7c20b-4b41-40a2-9058-d069793d69d8-registry-certificates\") pod \"image-registry-66df7c8f76-k7g28\" (UID: \"d0f7c20b-4b41-40a2-9058-d069793d69d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.363027 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0f7c20b-4b41-40a2-9058-d069793d69d8-trusted-ca\") pod \"image-registry-66df7c8f76-k7g28\" (UID: \"d0f7c20b-4b41-40a2-9058-d069793d69d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.368112 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d0f7c20b-4b41-40a2-9058-d069793d69d8-registry-tls\") pod \"image-registry-66df7c8f76-k7g28\" (UID: \"d0f7c20b-4b41-40a2-9058-d069793d69d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.372736 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d0f7c20b-4b41-40a2-9058-d069793d69d8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k7g28\" (UID: \"d0f7c20b-4b41-40a2-9058-d069793d69d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.382163 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d0f7c20b-4b41-40a2-9058-d069793d69d8-bound-sa-token\") pod \"image-registry-66df7c8f76-k7g28\" (UID: \"d0f7c20b-4b41-40a2-9058-d069793d69d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.384576 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt85x\" (UniqueName: \"kubernetes.io/projected/d0f7c20b-4b41-40a2-9058-d069793d69d8-kube-api-access-xt85x\") pod \"image-registry-66df7c8f76-k7g28\" (UID: \"d0f7c20b-4b41-40a2-9058-d069793d69d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.419524 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:23 crc kubenswrapper[4820]: I0201 14:26:23.876449 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k7g28"]
Feb 01 14:26:23 crc kubenswrapper[4820]: W0201 14:26:23.883398 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0f7c20b_4b41_40a2_9058_d069793d69d8.slice/crio-92b3ed935cc2600b48478fd2800ebaf66cb2953ce4e0effcb9545bbbc8855405 WatchSource:0}: Error finding container 92b3ed935cc2600b48478fd2800ebaf66cb2953ce4e0effcb9545bbbc8855405: Status 404 returned error can't find the container with id 92b3ed935cc2600b48478fd2800ebaf66cb2953ce4e0effcb9545bbbc8855405
Feb 01 14:26:24 crc kubenswrapper[4820]: I0201 14:26:24.865727 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-k7g28" event={"ID":"d0f7c20b-4b41-40a2-9058-d069793d69d8","Type":"ContainerStarted","Data":"8284ef820ffe7ae3e0038f079dc5de22de7fbe760c27547816e1c5e90e2d9245"}
Feb 01 14:26:24 crc kubenswrapper[4820]: I0201 14:26:24.866023 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-k7g28"
Feb 01 14:26:24 crc kubenswrapper[4820]: I0201 14:26:24.866034 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-k7g28" event={"ID":"d0f7c20b-4b41-40a2-9058-d069793d69d8","Type":"ContainerStarted","Data":"92b3ed935cc2600b48478fd2800ebaf66cb2953ce4e0effcb9545bbbc8855405"}
Feb 01 14:26:24 crc kubenswrapper[4820]: I0201 14:26:24.889113 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-k7g28" podStartSLOduration=1.889092395 podStartE2EDuration="1.889092395s" podCreationTimestamp="2026-02-01 14:26:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:26:24.88405566 +0000 UTC m=+326.404421944" watchObservedRunningTime="2026-02-01 14:26:24.889092395 +0000 UTC m=+326.409458679"
Feb 01 14:26:27 crc kubenswrapper[4820]: I0201 14:26:27.609437 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6"]
Feb 01 14:26:27 crc kubenswrapper[4820]: I0201 14:26:27.609639 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6" podUID="4ce19335-f447-4aec-8c8b-bb1896a6ae3f" containerName="route-controller-manager" containerID="cri-o://957a0e7c2aec2d11f7c0f2203199175e05301e613a811e39da646835cab52c54" gracePeriod=30
Feb 01 14:26:27 crc kubenswrapper[4820]: I0201 14:26:27.887758 4820 generic.go:334] "Generic (PLEG): container finished" podID="4ce19335-f447-4aec-8c8b-bb1896a6ae3f" containerID="957a0e7c2aec2d11f7c0f2203199175e05301e613a811e39da646835cab52c54" exitCode=0
Feb 01 14:26:27 crc kubenswrapper[4820]: I0201 14:26:27.887851 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6" event={"ID":"4ce19335-f447-4aec-8c8b-bb1896a6ae3f","Type":"ContainerDied","Data":"957a0e7c2aec2d11f7c0f2203199175e05301e613a811e39da646835cab52c54"}
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.016029 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6"
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.117926 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7492\" (UniqueName: \"kubernetes.io/projected/4ce19335-f447-4aec-8c8b-bb1896a6ae3f-kube-api-access-f7492\") pod \"4ce19335-f447-4aec-8c8b-bb1896a6ae3f\" (UID: \"4ce19335-f447-4aec-8c8b-bb1896a6ae3f\") "
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.117972 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce19335-f447-4aec-8c8b-bb1896a6ae3f-config\") pod \"4ce19335-f447-4aec-8c8b-bb1896a6ae3f\" (UID: \"4ce19335-f447-4aec-8c8b-bb1896a6ae3f\") "
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.118058 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ce19335-f447-4aec-8c8b-bb1896a6ae3f-client-ca\") pod \"4ce19335-f447-4aec-8c8b-bb1896a6ae3f\" (UID: \"4ce19335-f447-4aec-8c8b-bb1896a6ae3f\") "
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.118081 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ce19335-f447-4aec-8c8b-bb1896a6ae3f-serving-cert\") pod \"4ce19335-f447-4aec-8c8b-bb1896a6ae3f\" (UID: \"4ce19335-f447-4aec-8c8b-bb1896a6ae3f\") "
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.118963 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ce19335-f447-4aec-8c8b-bb1896a6ae3f-config" (OuterVolumeSpecName: "config") pod "4ce19335-f447-4aec-8c8b-bb1896a6ae3f" (UID: "4ce19335-f447-4aec-8c8b-bb1896a6ae3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.119025 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ce19335-f447-4aec-8c8b-bb1896a6ae3f-client-ca" (OuterVolumeSpecName: "client-ca") pod "4ce19335-f447-4aec-8c8b-bb1896a6ae3f" (UID: "4ce19335-f447-4aec-8c8b-bb1896a6ae3f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.131187 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ce19335-f447-4aec-8c8b-bb1896a6ae3f-kube-api-access-f7492" (OuterVolumeSpecName: "kube-api-access-f7492") pod "4ce19335-f447-4aec-8c8b-bb1896a6ae3f" (UID: "4ce19335-f447-4aec-8c8b-bb1896a6ae3f"). InnerVolumeSpecName "kube-api-access-f7492". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.131311 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ce19335-f447-4aec-8c8b-bb1896a6ae3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4ce19335-f447-4aec-8c8b-bb1896a6ae3f" (UID: "4ce19335-f447-4aec-8c8b-bb1896a6ae3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.219586 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7492\" (UniqueName: \"kubernetes.io/projected/4ce19335-f447-4aec-8c8b-bb1896a6ae3f-kube-api-access-f7492\") on node \"crc\" DevicePath \"\""
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.219622 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce19335-f447-4aec-8c8b-bb1896a6ae3f-config\") on node \"crc\" DevicePath \"\""
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.219634 4820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ce19335-f447-4aec-8c8b-bb1896a6ae3f-client-ca\") on node \"crc\" DevicePath \"\""
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.219642 4820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ce19335-f447-4aec-8c8b-bb1896a6ae3f-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.694061 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54fd65d85f-hgmmj"]
Feb 01 14:26:28 crc kubenswrapper[4820]: E0201 14:26:28.694467 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ce19335-f447-4aec-8c8b-bb1896a6ae3f" containerName="route-controller-manager"
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.694494 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ce19335-f447-4aec-8c8b-bb1896a6ae3f" containerName="route-controller-manager"
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.694708 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ce19335-f447-4aec-8c8b-bb1896a6ae3f" containerName="route-controller-manager"
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.695475 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-hgmmj"
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.700652 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54fd65d85f-hgmmj"]
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.826381 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkmjr\" (UniqueName: \"kubernetes.io/projected/ed0c985a-8b5b-4545-8ce7-54ad62872f77-kube-api-access-hkmjr\") pod \"route-controller-manager-54fd65d85f-hgmmj\" (UID: \"ed0c985a-8b5b-4545-8ce7-54ad62872f77\") " pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-hgmmj"
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.826500 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed0c985a-8b5b-4545-8ce7-54ad62872f77-config\") pod \"route-controller-manager-54fd65d85f-hgmmj\" (UID: \"ed0c985a-8b5b-4545-8ce7-54ad62872f77\") " pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-hgmmj"
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.826589 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed0c985a-8b5b-4545-8ce7-54ad62872f77-serving-cert\") pod \"route-controller-manager-54fd65d85f-hgmmj\" (UID: \"ed0c985a-8b5b-4545-8ce7-54ad62872f77\") " pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-hgmmj"
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.826643 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed0c985a-8b5b-4545-8ce7-54ad62872f77-client-ca\") pod \"route-controller-manager-54fd65d85f-hgmmj\" (UID: \"ed0c985a-8b5b-4545-8ce7-54ad62872f77\") " pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-hgmmj"
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.894762 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6" event={"ID":"4ce19335-f447-4aec-8c8b-bb1896a6ae3f","Type":"ContainerDied","Data":"e7286323a58671fa0fdd100ccf1841a76e833c666a6f385a22e4530a29ac79fc"}
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.894840 4820 scope.go:117] "RemoveContainer" containerID="957a0e7c2aec2d11f7c0f2203199175e05301e613a811e39da646835cab52c54"
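The SyncLoop ADD/UPDATE/DELETE entries with source="api" are the kubelet's end of an ordinary API watch; any client can observe the same churn. A client-go sketch (same kubeconfig assumption as above) that subscribes to pod events in this namespace; the ADDED/MODIFIED/DELETED event types line up with the ADD/UPDATE/DELETE lines here:

    // podwatch.go - watches pod lifecycle events in one namespace (sketch).
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        w, err := cs.CoreV1().Pods("openshift-route-controller-manager").
            Watch(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        defer w.Stop()

        // During a rollout like the one above this prints DELETED for the old
        // ReplicaSet's pod and ADDED/MODIFIED for its replacement.
        for ev := range w.ResultChan() {
            pod, ok := ev.Object.(*corev1.Pod)
            if !ok {
                continue
            }
            fmt.Printf("%-8s %s (%s)\n", ev.Type, pod.Name, pod.UID)
        }
    }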
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.894854 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6"
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.928436 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkmjr\" (UniqueName: \"kubernetes.io/projected/ed0c985a-8b5b-4545-8ce7-54ad62872f77-kube-api-access-hkmjr\") pod \"route-controller-manager-54fd65d85f-hgmmj\" (UID: \"ed0c985a-8b5b-4545-8ce7-54ad62872f77\") " pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-hgmmj"
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.928536 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed0c985a-8b5b-4545-8ce7-54ad62872f77-config\") pod \"route-controller-manager-54fd65d85f-hgmmj\" (UID: \"ed0c985a-8b5b-4545-8ce7-54ad62872f77\") " pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-hgmmj"
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.928591 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed0c985a-8b5b-4545-8ce7-54ad62872f77-serving-cert\") pod \"route-controller-manager-54fd65d85f-hgmmj\" (UID: \"ed0c985a-8b5b-4545-8ce7-54ad62872f77\") " pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-hgmmj"
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.928631 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed0c985a-8b5b-4545-8ce7-54ad62872f77-client-ca\") pod \"route-controller-manager-54fd65d85f-hgmmj\" (UID: \"ed0c985a-8b5b-4545-8ce7-54ad62872f77\") " pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-hgmmj"
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.930174 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed0c985a-8b5b-4545-8ce7-54ad62872f77-config\") pod \"route-controller-manager-54fd65d85f-hgmmj\" (UID: \"ed0c985a-8b5b-4545-8ce7-54ad62872f77\") " pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-hgmmj"
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.931281 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6"]
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.931439 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed0c985a-8b5b-4545-8ce7-54ad62872f77-client-ca\") pod \"route-controller-manager-54fd65d85f-hgmmj\" (UID: \"ed0c985a-8b5b-4545-8ce7-54ad62872f77\") " pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-hgmmj"
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.936797 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-785564f44b-s64b6"]
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.938598 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed0c985a-8b5b-4545-8ce7-54ad62872f77-serving-cert\") pod \"route-controller-manager-54fd65d85f-hgmmj\" (UID: \"ed0c985a-8b5b-4545-8ce7-54ad62872f77\") " pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-hgmmj"
Feb 01 14:26:28 crc kubenswrapper[4820]: I0201 14:26:28.945620 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkmjr\" (UniqueName: \"kubernetes.io/projected/ed0c985a-8b5b-4545-8ce7-54ad62872f77-kube-api-access-hkmjr\") pod \"route-controller-manager-54fd65d85f-hgmmj\" (UID: \"ed0c985a-8b5b-4545-8ce7-54ad62872f77\") " pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-hgmmj"
Feb 01 14:26:29 crc kubenswrapper[4820]: I0201 14:26:29.011550 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-hgmmj"
Feb 01 14:26:29 crc kubenswrapper[4820]: I0201 14:26:29.209434 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ce19335-f447-4aec-8c8b-bb1896a6ae3f" path="/var/lib/kubelet/pods/4ce19335-f447-4aec-8c8b-bb1896a6ae3f/volumes"
Feb 01 14:26:29 crc kubenswrapper[4820]: I0201 14:26:29.395072 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54fd65d85f-hgmmj"]
Feb 01 14:26:29 crc kubenswrapper[4820]: W0201 14:26:29.399713 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded0c985a_8b5b_4545_8ce7_54ad62872f77.slice/crio-2a2fb21edf3be173895b9f430f77a4f97b8add7c1d2890288042047506bdb02b WatchSource:0}: Error finding container 2a2fb21edf3be173895b9f430f77a4f97b8add7c1d2890288042047506bdb02b: Status 404 returned error can't find the container with id 2a2fb21edf3be173895b9f430f77a4f97b8add7c1d2890288042047506bdb02b
Feb 01 14:26:29 crc kubenswrapper[4820]: I0201 14:26:29.901722 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-hgmmj" event={"ID":"ed0c985a-8b5b-4545-8ce7-54ad62872f77","Type":"ContainerStarted","Data":"9e78562c7263137ec2de97e15449486b4bb7495f105eb3aa07a6915324b437b4"}
Feb 01 14:26:29 crc kubenswrapper[4820]: I0201 14:26:29.901765 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-hgmmj" event={"ID":"ed0c985a-8b5b-4545-8ce7-54ad62872f77","Type":"ContainerStarted","Data":"2a2fb21edf3be173895b9f430f77a4f97b8add7c1d2890288042047506bdb02b"}
Feb 01 14:26:29 crc kubenswrapper[4820]: I0201 14:26:29.901898 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-hgmmj"
Feb 01 14:26:29 crc kubenswrapper[4820]: I0201 14:26:29.920113 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-hgmmj" podStartSLOduration=2.9200945149999997 podStartE2EDuration="2.920094515s" podCreationTimestamp="2026-02-01 14:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:26:29.919930121 +0000 UTC m=+331.440296425" watchObservedRunningTime="2026-02-01 14:26:29.920094515 +0000 UTC m=+331.440460799"
Feb 01 14:26:30 crc kubenswrapper[4820]: I0201 14:26:30.000221 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54fd65d85f-hgmmj"
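The "Failed to process watch event ... Status 404" warnings above are typically harmless: cadvisor notices the new cgroup before the container is queryable from the runtime, and the state is picked up on a later sync. The cgroup path itself is decodable: the pod UID is embedded in the .slice name with dashes flattened to underscores, and the trailing crio-<id> component names the sandbox or container. A small sketch of that decoding:

    // slicename.go - decodes the cgroup path from the cadvisor warning above.
    package main

    import (
        "fmt"
        "strings"
    )

    // podUIDFromSlice extracts the pod UID and crio container ID from a
    // kubepods cgroup path (burstable QoS class, as in the warnings above).
    func podUIDFromSlice(path string) (uid, containerID string) {
        for _, part := range strings.Split(path, "/") {
            if strings.HasPrefix(part, "kubepods-burstable-pod") && strings.HasSuffix(part, ".slice") {
                raw := strings.TrimSuffix(strings.TrimPrefix(part, "kubepods-burstable-pod"), ".slice")
                uid = strings.ReplaceAll(raw, "_", "-") // systemd flattens "-" to "_"
            }
            if strings.HasPrefix(part, "crio-") {
                containerID = strings.TrimPrefix(part, "crio-")
            }
        }
        return uid, containerID
    }

    func main() {
        uid, cid := podUIDFromSlice("/kubepods.slice/kubepods-burstable.slice/" +
            "kubepods-burstable-poded0c985a_8b5b_4545_8ce7_54ad62872f77.slice/" +
            "crio-2a2fb21edf3be173895b9f430f77a4f97b8add7c1d2890288042047506bdb02b")
        fmt.Println(uid) // ed0c985a-8b5b-4545-8ce7-54ad62872f77, the hgmmj pod
        fmt.Println(cid)
    }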
pods=["openshift-marketplace/certified-operators-z5dnc"] Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.590726 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z5dnc" podUID="5cd3df7b-e150-490b-9785-ccfab6b264b5" containerName="registry-server" containerID="cri-o://65970a680622174be9829cba1fb57e0dd75afd616f0bd1b0f162d57af0062efd" gracePeriod=30 Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.602321 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p6kqz"] Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.603017 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p6kqz" podUID="3dee68b0-a47b-49fd-a889-7bf3bc58c380" containerName="registry-server" containerID="cri-o://7f2a1ae2c4f9950cd8f0aeafd780dbbb7a4796dc9aacc7ca394e0a0e084da54e" gracePeriod=30 Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.611965 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wgsmb"] Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.612241 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" podUID="973ec7e3-13c9-47b8-b10e-5bff2619f164" containerName="marketplace-operator" containerID="cri-o://024ece7bc787504b657797c37d098a64c542f7a04a66bc7044adfb8e966d94d7" gracePeriod=30 Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.615228 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkfmb"] Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.615481 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vkfmb" podUID="f73b6fb9-8420-42fe-9b3d-42d17a204743" containerName="registry-server" containerID="cri-o://4fefd273802b187a6cf47320b83f79969469dd466c1620fd72f160c512720fbe" gracePeriod=30 Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.625719 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jn7nh"] Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.626795 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jn7nh" Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.634085 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jd9wc"] Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.634419 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jd9wc" podUID="df4ea2d8-4be0-4e30-b48a-484a93d725b0" containerName="registry-server" containerID="cri-o://71b634e1b2b5ce2c0ef7e0aec1ab54abc6a4acb171985584224708847580c2fb" gracePeriod=30 Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.643010 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jn7nh"] Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.767022 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d0c59a0-904e-4386-9a3b-7980f0b1e697-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jn7nh\" (UID: \"0d0c59a0-904e-4386-9a3b-7980f0b1e697\") " pod="openshift-marketplace/marketplace-operator-79b997595-jn7nh" Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.767102 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0d0c59a0-904e-4386-9a3b-7980f0b1e697-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jn7nh\" (UID: \"0d0c59a0-904e-4386-9a3b-7980f0b1e697\") " pod="openshift-marketplace/marketplace-operator-79b997595-jn7nh" Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.767290 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ww49\" (UniqueName: \"kubernetes.io/projected/0d0c59a0-904e-4386-9a3b-7980f0b1e697-kube-api-access-5ww49\") pod \"marketplace-operator-79b997595-jn7nh\" (UID: \"0d0c59a0-904e-4386-9a3b-7980f0b1e697\") " pod="openshift-marketplace/marketplace-operator-79b997595-jn7nh" Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.869127 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ww49\" (UniqueName: \"kubernetes.io/projected/0d0c59a0-904e-4386-9a3b-7980f0b1e697-kube-api-access-5ww49\") pod \"marketplace-operator-79b997595-jn7nh\" (UID: \"0d0c59a0-904e-4386-9a3b-7980f0b1e697\") " pod="openshift-marketplace/marketplace-operator-79b997595-jn7nh" Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.869222 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d0c59a0-904e-4386-9a3b-7980f0b1e697-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jn7nh\" (UID: \"0d0c59a0-904e-4386-9a3b-7980f0b1e697\") " pod="openshift-marketplace/marketplace-operator-79b997595-jn7nh" Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.869246 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0d0c59a0-904e-4386-9a3b-7980f0b1e697-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jn7nh\" (UID: \"0d0c59a0-904e-4386-9a3b-7980f0b1e697\") " pod="openshift-marketplace/marketplace-operator-79b997595-jn7nh" Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.870471 4820 
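Each "Killing container with a grace period ... gracePeriod=30" entry is the kubelet honoring the grace period attached to the API-side delete. The client-side counterpart, sketched with client-go (kubeconfig assumption as above; pod name and namespace taken from the surrounding entries):

    // gracedelete.go - deletes a pod with an explicit grace period (sketch).
    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        grace := int64(30) // matches gracePeriod=30 in the entries above
        err = cs.CoreV1().Pods("openshift-marketplace").Delete(
            context.Background(),
            "certified-operators-z5dnc",
            metav1.DeleteOptions{GracePeriodSeconds: &grace},
        )
        if err != nil {
            panic(err)
        }
    }

After such a delete the kubelet sends the container its stop signal and, if it has not exited within the grace period, kills it; the exitCode=0 lines that follow show these registry servers shutting down cleanly well within the 30 seconds.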
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d0c59a0-904e-4386-9a3b-7980f0b1e697-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jn7nh\" (UID: \"0d0c59a0-904e-4386-9a3b-7980f0b1e697\") " pod="openshift-marketplace/marketplace-operator-79b997595-jn7nh" Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.876602 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0d0c59a0-904e-4386-9a3b-7980f0b1e697-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jn7nh\" (UID: \"0d0c59a0-904e-4386-9a3b-7980f0b1e697\") " pod="openshift-marketplace/marketplace-operator-79b997595-jn7nh" Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.886431 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ww49\" (UniqueName: \"kubernetes.io/projected/0d0c59a0-904e-4386-9a3b-7980f0b1e697-kube-api-access-5ww49\") pod \"marketplace-operator-79b997595-jn7nh\" (UID: \"0d0c59a0-904e-4386-9a3b-7980f0b1e697\") " pod="openshift-marketplace/marketplace-operator-79b997595-jn7nh" Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.962914 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jn7nh" Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.968852 4820 generic.go:334] "Generic (PLEG): container finished" podID="973ec7e3-13c9-47b8-b10e-5bff2619f164" containerID="024ece7bc787504b657797c37d098a64c542f7a04a66bc7044adfb8e966d94d7" exitCode=0 Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.968931 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" event={"ID":"973ec7e3-13c9-47b8-b10e-5bff2619f164","Type":"ContainerDied","Data":"024ece7bc787504b657797c37d098a64c542f7a04a66bc7044adfb8e966d94d7"} Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.968964 4820 scope.go:117] "RemoveContainer" containerID="b346df61d5c736e0515c1075d0d1bbb5fcae41532d5ff6c94b78ff3f5445cd50" Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.972151 4820 generic.go:334] "Generic (PLEG): container finished" podID="f73b6fb9-8420-42fe-9b3d-42d17a204743" containerID="4fefd273802b187a6cf47320b83f79969469dd466c1620fd72f160c512720fbe" exitCode=0 Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.972195 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkfmb" event={"ID":"f73b6fb9-8420-42fe-9b3d-42d17a204743","Type":"ContainerDied","Data":"4fefd273802b187a6cf47320b83f79969469dd466c1620fd72f160c512720fbe"} Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.975299 4820 generic.go:334] "Generic (PLEG): container finished" podID="5cd3df7b-e150-490b-9785-ccfab6b264b5" containerID="65970a680622174be9829cba1fb57e0dd75afd616f0bd1b0f162d57af0062efd" exitCode=0 Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.975460 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5dnc" event={"ID":"5cd3df7b-e150-490b-9785-ccfab6b264b5","Type":"ContainerDied","Data":"65970a680622174be9829cba1fb57e0dd75afd616f0bd1b0f162d57af0062efd"} Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.977582 4820 generic.go:334] "Generic (PLEG): container finished" podID="3dee68b0-a47b-49fd-a889-7bf3bc58c380" 
containerID="7f2a1ae2c4f9950cd8f0aeafd780dbbb7a4796dc9aacc7ca394e0a0e084da54e" exitCode=0 Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.977632 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6kqz" event={"ID":"3dee68b0-a47b-49fd-a889-7bf3bc58c380","Type":"ContainerDied","Data":"7f2a1ae2c4f9950cd8f0aeafd780dbbb7a4796dc9aacc7ca394e0a0e084da54e"} Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.979541 4820 generic.go:334] "Generic (PLEG): container finished" podID="df4ea2d8-4be0-4e30-b48a-484a93d725b0" containerID="71b634e1b2b5ce2c0ef7e0aec1ab54abc6a4acb171985584224708847580c2fb" exitCode=0 Feb 01 14:26:39 crc kubenswrapper[4820]: I0201 14:26:39.979565 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jd9wc" event={"ID":"df4ea2d8-4be0-4e30-b48a-484a93d725b0","Type":"ContainerDied","Data":"71b634e1b2b5ce2c0ef7e0aec1ab54abc6a4acb171985584224708847580c2fb"} Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.068297 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z5dnc" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.124438 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkfmb" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.136671 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.168033 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6kqz" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.177588 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cd3df7b-e150-490b-9785-ccfab6b264b5-catalog-content\") pod \"5cd3df7b-e150-490b-9785-ccfab6b264b5\" (UID: \"5cd3df7b-e150-490b-9785-ccfab6b264b5\") " Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.177737 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46s25\" (UniqueName: \"kubernetes.io/projected/5cd3df7b-e150-490b-9785-ccfab6b264b5-kube-api-access-46s25\") pod \"5cd3df7b-e150-490b-9785-ccfab6b264b5\" (UID: \"5cd3df7b-e150-490b-9785-ccfab6b264b5\") " Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.177826 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cd3df7b-e150-490b-9785-ccfab6b264b5-utilities\") pod \"5cd3df7b-e150-490b-9785-ccfab6b264b5\" (UID: \"5cd3df7b-e150-490b-9785-ccfab6b264b5\") " Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.179492 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cd3df7b-e150-490b-9785-ccfab6b264b5-utilities" (OuterVolumeSpecName: "utilities") pod "5cd3df7b-e150-490b-9785-ccfab6b264b5" (UID: "5cd3df7b-e150-490b-9785-ccfab6b264b5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.187671 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cd3df7b-e150-490b-9785-ccfab6b264b5-kube-api-access-46s25" (OuterVolumeSpecName: "kube-api-access-46s25") pod "5cd3df7b-e150-490b-9785-ccfab6b264b5" (UID: "5cd3df7b-e150-490b-9785-ccfab6b264b5"). InnerVolumeSpecName "kube-api-access-46s25". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.187728 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jd9wc" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.247407 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cd3df7b-e150-490b-9785-ccfab6b264b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5cd3df7b-e150-490b-9785-ccfab6b264b5" (UID: "5cd3df7b-e150-490b-9785-ccfab6b264b5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.280314 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/973ec7e3-13c9-47b8-b10e-5bff2619f164-marketplace-operator-metrics\") pod \"973ec7e3-13c9-47b8-b10e-5bff2619f164\" (UID: \"973ec7e3-13c9-47b8-b10e-5bff2619f164\") " Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.280366 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58dc7\" (UniqueName: \"kubernetes.io/projected/973ec7e3-13c9-47b8-b10e-5bff2619f164-kube-api-access-58dc7\") pod \"973ec7e3-13c9-47b8-b10e-5bff2619f164\" (UID: \"973ec7e3-13c9-47b8-b10e-5bff2619f164\") " Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.280398 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dee68b0-a47b-49fd-a889-7bf3bc58c380-catalog-content\") pod \"3dee68b0-a47b-49fd-a889-7bf3bc58c380\" (UID: \"3dee68b0-a47b-49fd-a889-7bf3bc58c380\") " Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.280419 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvm4m\" (UniqueName: \"kubernetes.io/projected/df4ea2d8-4be0-4e30-b48a-484a93d725b0-kube-api-access-tvm4m\") pod \"df4ea2d8-4be0-4e30-b48a-484a93d725b0\" (UID: \"df4ea2d8-4be0-4e30-b48a-484a93d725b0\") " Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.280483 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dee68b0-a47b-49fd-a889-7bf3bc58c380-utilities\") pod \"3dee68b0-a47b-49fd-a889-7bf3bc58c380\" (UID: \"3dee68b0-a47b-49fd-a889-7bf3bc58c380\") " Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.280519 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df4ea2d8-4be0-4e30-b48a-484a93d725b0-utilities\") pod \"df4ea2d8-4be0-4e30-b48a-484a93d725b0\" (UID: \"df4ea2d8-4be0-4e30-b48a-484a93d725b0\") " Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.280543 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/973ec7e3-13c9-47b8-b10e-5bff2619f164-marketplace-trusted-ca\") pod \"973ec7e3-13c9-47b8-b10e-5bff2619f164\" (UID: \"973ec7e3-13c9-47b8-b10e-5bff2619f164\") " Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.280585 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm5ql\" (UniqueName: \"kubernetes.io/projected/3dee68b0-a47b-49fd-a889-7bf3bc58c380-kube-api-access-sm5ql\") pod \"3dee68b0-a47b-49fd-a889-7bf3bc58c380\" (UID: \"3dee68b0-a47b-49fd-a889-7bf3bc58c380\") " Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.280617 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f73b6fb9-8420-42fe-9b3d-42d17a204743-utilities\") pod \"f73b6fb9-8420-42fe-9b3d-42d17a204743\" (UID: \"f73b6fb9-8420-42fe-9b3d-42d17a204743\") " Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.280640 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df4ea2d8-4be0-4e30-b48a-484a93d725b0-catalog-content\") pod \"df4ea2d8-4be0-4e30-b48a-484a93d725b0\" (UID: \"df4ea2d8-4be0-4e30-b48a-484a93d725b0\") " Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.280689 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f73b6fb9-8420-42fe-9b3d-42d17a204743-catalog-content\") pod \"f73b6fb9-8420-42fe-9b3d-42d17a204743\" (UID: \"f73b6fb9-8420-42fe-9b3d-42d17a204743\") " Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.280722 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9nv5\" (UniqueName: \"kubernetes.io/projected/f73b6fb9-8420-42fe-9b3d-42d17a204743-kube-api-access-g9nv5\") pod \"f73b6fb9-8420-42fe-9b3d-42d17a204743\" (UID: \"f73b6fb9-8420-42fe-9b3d-42d17a204743\") " Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.281009 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cd3df7b-e150-490b-9785-ccfab6b264b5-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.281027 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cd3df7b-e150-490b-9785-ccfab6b264b5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.281040 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46s25\" (UniqueName: \"kubernetes.io/projected/5cd3df7b-e150-490b-9785-ccfab6b264b5-kube-api-access-46s25\") on node \"crc\" DevicePath \"\"" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.281770 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df4ea2d8-4be0-4e30-b48a-484a93d725b0-utilities" (OuterVolumeSpecName: "utilities") pod "df4ea2d8-4be0-4e30-b48a-484a93d725b0" (UID: "df4ea2d8-4be0-4e30-b48a-484a93d725b0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.283377 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/973ec7e3-13c9-47b8-b10e-5bff2619f164-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "973ec7e3-13c9-47b8-b10e-5bff2619f164" (UID: "973ec7e3-13c9-47b8-b10e-5bff2619f164"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.283697 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3dee68b0-a47b-49fd-a889-7bf3bc58c380-utilities" (OuterVolumeSpecName: "utilities") pod "3dee68b0-a47b-49fd-a889-7bf3bc58c380" (UID: "3dee68b0-a47b-49fd-a889-7bf3bc58c380"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.284306 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f73b6fb9-8420-42fe-9b3d-42d17a204743-utilities" (OuterVolumeSpecName: "utilities") pod "f73b6fb9-8420-42fe-9b3d-42d17a204743" (UID: "f73b6fb9-8420-42fe-9b3d-42d17a204743"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.284410 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/973ec7e3-13c9-47b8-b10e-5bff2619f164-kube-api-access-58dc7" (OuterVolumeSpecName: "kube-api-access-58dc7") pod "973ec7e3-13c9-47b8-b10e-5bff2619f164" (UID: "973ec7e3-13c9-47b8-b10e-5bff2619f164"). InnerVolumeSpecName "kube-api-access-58dc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.286058 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df4ea2d8-4be0-4e30-b48a-484a93d725b0-kube-api-access-tvm4m" (OuterVolumeSpecName: "kube-api-access-tvm4m") pod "df4ea2d8-4be0-4e30-b48a-484a93d725b0" (UID: "df4ea2d8-4be0-4e30-b48a-484a93d725b0"). InnerVolumeSpecName "kube-api-access-tvm4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.286071 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dee68b0-a47b-49fd-a889-7bf3bc58c380-kube-api-access-sm5ql" (OuterVolumeSpecName: "kube-api-access-sm5ql") pod "3dee68b0-a47b-49fd-a889-7bf3bc58c380" (UID: "3dee68b0-a47b-49fd-a889-7bf3bc58c380"). InnerVolumeSpecName "kube-api-access-sm5ql". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.286292 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/973ec7e3-13c9-47b8-b10e-5bff2619f164-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "973ec7e3-13c9-47b8-b10e-5bff2619f164" (UID: "973ec7e3-13c9-47b8-b10e-5bff2619f164"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.287131 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f73b6fb9-8420-42fe-9b3d-42d17a204743-kube-api-access-g9nv5" (OuterVolumeSpecName: "kube-api-access-g9nv5") pod "f73b6fb9-8420-42fe-9b3d-42d17a204743" (UID: "f73b6fb9-8420-42fe-9b3d-42d17a204743"). InnerVolumeSpecName "kube-api-access-g9nv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.309742 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f73b6fb9-8420-42fe-9b3d-42d17a204743-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f73b6fb9-8420-42fe-9b3d-42d17a204743" (UID: "f73b6fb9-8420-42fe-9b3d-42d17a204743"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.341247 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3dee68b0-a47b-49fd-a889-7bf3bc58c380-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3dee68b0-a47b-49fd-a889-7bf3bc58c380" (UID: "3dee68b0-a47b-49fd-a889-7bf3bc58c380"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.383365 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f73b6fb9-8420-42fe-9b3d-42d17a204743-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.383474 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9nv5\" (UniqueName: \"kubernetes.io/projected/f73b6fb9-8420-42fe-9b3d-42d17a204743-kube-api-access-g9nv5\") on node \"crc\" DevicePath \"\"" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.383500 4820 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/973ec7e3-13c9-47b8-b10e-5bff2619f164-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.383517 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58dc7\" (UniqueName: \"kubernetes.io/projected/973ec7e3-13c9-47b8-b10e-5bff2619f164-kube-api-access-58dc7\") on node \"crc\" DevicePath \"\"" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.383532 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dee68b0-a47b-49fd-a889-7bf3bc58c380-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.383548 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvm4m\" (UniqueName: \"kubernetes.io/projected/df4ea2d8-4be0-4e30-b48a-484a93d725b0-kube-api-access-tvm4m\") on node \"crc\" DevicePath \"\"" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.383562 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dee68b0-a47b-49fd-a889-7bf3bc58c380-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.383574 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/df4ea2d8-4be0-4e30-b48a-484a93d725b0-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.383588 4820 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/973ec7e3-13c9-47b8-b10e-5bff2619f164-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.383602 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm5ql\" (UniqueName: \"kubernetes.io/projected/3dee68b0-a47b-49fd-a889-7bf3bc58c380-kube-api-access-sm5ql\") on node \"crc\" DevicePath \"\"" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.383616 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f73b6fb9-8420-42fe-9b3d-42d17a204743-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.406426 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df4ea2d8-4be0-4e30-b48a-484a93d725b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df4ea2d8-4be0-4e30-b48a-484a93d725b0" (UID: "df4ea2d8-4be0-4e30-b48a-484a93d725b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.426997 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jn7nh"] Feb 01 14:26:40 crc kubenswrapper[4820]: W0201 14:26:40.431786 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d0c59a0_904e_4386_9a3b_7980f0b1e697.slice/crio-f5115652344dce6074ebe69b2284cc38a12211ee0da3bef21c1c391325e7c617 WatchSource:0}: Error finding container f5115652344dce6074ebe69b2284cc38a12211ee0da3bef21c1c391325e7c617: Status 404 returned error can't find the container with id f5115652344dce6074ebe69b2284cc38a12211ee0da3bef21c1c391325e7c617 Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.485466 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df4ea2d8-4be0-4e30-b48a-484a93d725b0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.985609 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jn7nh" event={"ID":"0d0c59a0-904e-4386-9a3b-7980f0b1e697","Type":"ContainerStarted","Data":"0811103b040c2abad642d222e8b6401229383f62fd102cf7abe696ee8c0e313d"} Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.985682 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jn7nh" event={"ID":"0d0c59a0-904e-4386-9a3b-7980f0b1e697","Type":"ContainerStarted","Data":"f5115652344dce6074ebe69b2284cc38a12211ee0da3bef21c1c391325e7c617"} Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.985719 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-jn7nh" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.987915 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-jn7nh" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.988722 4820 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-vkfmb" event={"ID":"f73b6fb9-8420-42fe-9b3d-42d17a204743","Type":"ContainerDied","Data":"9409288cc34cede2c1be5b4412688e80dbba779f2ac6c80f4a13b1922b0d5f7c"} Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.988818 4820 scope.go:117] "RemoveContainer" containerID="4fefd273802b187a6cf47320b83f79969469dd466c1620fd72f160c512720fbe" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.988993 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkfmb" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.993174 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5dnc" event={"ID":"5cd3df7b-e150-490b-9785-ccfab6b264b5","Type":"ContainerDied","Data":"c2e6593def29b6014178c79276b5a3c799ccbdf439f4b890fa51a5c9c9d147d2"} Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.993264 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z5dnc" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.995038 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6kqz" event={"ID":"3dee68b0-a47b-49fd-a889-7bf3bc58c380","Type":"ContainerDied","Data":"fb04e785a253ab3ec846d5e4dee59e8f25fa63535ce3165519f2b5ea0cfd1cfa"} Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.995171 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6kqz" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.998017 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jd9wc" event={"ID":"df4ea2d8-4be0-4e30-b48a-484a93d725b0","Type":"ContainerDied","Data":"31b9502ab198a2b96078dcac540311b21c66e9211966a891aaab58b2787078cc"} Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.998098 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jd9wc" Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.999766 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" event={"ID":"973ec7e3-13c9-47b8-b10e-5bff2619f164","Type":"ContainerDied","Data":"e04961e142979945e79e51c2733cbb7723cac94e8dc0b3ad0177a8d4772cfdc1"} Feb 01 14:26:40 crc kubenswrapper[4820]: I0201 14:26:40.999794 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wgsmb" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.009206 4820 scope.go:117] "RemoveContainer" containerID="56ebc4d6736e61a21a907522e3ea546446f68b8d5e8df8e925d9ec730b9c8055" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.014084 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-jn7nh" podStartSLOduration=2.014062109 podStartE2EDuration="2.014062109s" podCreationTimestamp="2026-02-01 14:26:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:26:41.005023036 +0000 UTC m=+342.525389320" watchObservedRunningTime="2026-02-01 14:26:41.014062109 +0000 UTC m=+342.534428393" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.026384 4820 scope.go:117] "RemoveContainer" containerID="cb4f45c86e1fdd49847a9fe55ff2951958d50e87766a303ba12e1b4445709985" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.057600 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkfmb"] Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.073865 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkfmb"] Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.082926 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p6kqz"] Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.088758 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p6kqz"] Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.090614 4820 scope.go:117] "RemoveContainer" containerID="65970a680622174be9829cba1fb57e0dd75afd616f0bd1b0f162d57af0062efd" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.098902 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jd9wc"] Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.105585 4820 scope.go:117] "RemoveContainer" containerID="21d572e11972efd7350d14c3194ae036d09b1f538e37ea2547ba5d5b6044602b" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.107305 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jd9wc"] Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.115014 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wgsmb"] Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.123233 4820 scope.go:117] "RemoveContainer" containerID="0ec2f65ff6560662931de6e6881d5bb909de54a157b8b49c9c8b8f1ae409f627" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.125718 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wgsmb"] Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.130184 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z5dnc"] Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.137623 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z5dnc"] Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.142718 4820 scope.go:117] "RemoveContainer" containerID="7f2a1ae2c4f9950cd8f0aeafd780dbbb7a4796dc9aacc7ca394e0a0e084da54e" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 
14:26:41.156658 4820 scope.go:117] "RemoveContainer" containerID="0a6b1c512e29dd838af3193ce4944c03b6998a5afd4de09cb5173753546de5d1" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.170481 4820 scope.go:117] "RemoveContainer" containerID="df26604093974defe4e59716e198e227f60977d274340e979b3799fdba725624" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.183622 4820 scope.go:117] "RemoveContainer" containerID="71b634e1b2b5ce2c0ef7e0aec1ab54abc6a4acb171985584224708847580c2fb" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.201521 4820 scope.go:117] "RemoveContainer" containerID="bd6f7b4a3bb7848abeca988d2f1dd4f6f8dcca89b422d4f81be4252702c3d5f8" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.205362 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dee68b0-a47b-49fd-a889-7bf3bc58c380" path="/var/lib/kubelet/pods/3dee68b0-a47b-49fd-a889-7bf3bc58c380/volumes" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.206163 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cd3df7b-e150-490b-9785-ccfab6b264b5" path="/var/lib/kubelet/pods/5cd3df7b-e150-490b-9785-ccfab6b264b5/volumes" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.207062 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="973ec7e3-13c9-47b8-b10e-5bff2619f164" path="/var/lib/kubelet/pods/973ec7e3-13c9-47b8-b10e-5bff2619f164/volumes" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.208217 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df4ea2d8-4be0-4e30-b48a-484a93d725b0" path="/var/lib/kubelet/pods/df4ea2d8-4be0-4e30-b48a-484a93d725b0/volumes" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.209032 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f73b6fb9-8420-42fe-9b3d-42d17a204743" path="/var/lib/kubelet/pods/f73b6fb9-8420-42fe-9b3d-42d17a204743/volumes" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.214915 4820 scope.go:117] "RemoveContainer" containerID="8187f366604b4eb05d3b122a600770b61fa44c20cc56ad80d79b2ddd2869babe" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.225656 4820 scope.go:117] "RemoveContainer" containerID="024ece7bc787504b657797c37d098a64c542f7a04a66bc7044adfb8e966d94d7" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.803166 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4rv9w"] Feb 01 14:26:41 crc kubenswrapper[4820]: E0201 14:26:41.803467 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dee68b0-a47b-49fd-a889-7bf3bc58c380" containerName="extract-content" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.803479 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dee68b0-a47b-49fd-a889-7bf3bc58c380" containerName="extract-content" Feb 01 14:26:41 crc kubenswrapper[4820]: E0201 14:26:41.803492 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f73b6fb9-8420-42fe-9b3d-42d17a204743" containerName="extract-content" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.803498 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f73b6fb9-8420-42fe-9b3d-42d17a204743" containerName="extract-content" Feb 01 14:26:41 crc kubenswrapper[4820]: E0201 14:26:41.803506 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df4ea2d8-4be0-4e30-b48a-484a93d725b0" containerName="registry-server" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.803513 4820 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="df4ea2d8-4be0-4e30-b48a-484a93d725b0" containerName="registry-server" Feb 01 14:26:41 crc kubenswrapper[4820]: E0201 14:26:41.803522 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd3df7b-e150-490b-9785-ccfab6b264b5" containerName="extract-content" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.803528 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd3df7b-e150-490b-9785-ccfab6b264b5" containerName="extract-content" Feb 01 14:26:41 crc kubenswrapper[4820]: E0201 14:26:41.803539 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dee68b0-a47b-49fd-a889-7bf3bc58c380" containerName="extract-utilities" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.803545 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dee68b0-a47b-49fd-a889-7bf3bc58c380" containerName="extract-utilities" Feb 01 14:26:41 crc kubenswrapper[4820]: E0201 14:26:41.803553 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd3df7b-e150-490b-9785-ccfab6b264b5" containerName="extract-utilities" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.803559 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd3df7b-e150-490b-9785-ccfab6b264b5" containerName="extract-utilities" Feb 01 14:26:41 crc kubenswrapper[4820]: E0201 14:26:41.803567 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd3df7b-e150-490b-9785-ccfab6b264b5" containerName="registry-server" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.803573 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd3df7b-e150-490b-9785-ccfab6b264b5" containerName="registry-server" Feb 01 14:26:41 crc kubenswrapper[4820]: E0201 14:26:41.803580 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="973ec7e3-13c9-47b8-b10e-5bff2619f164" containerName="marketplace-operator" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.803586 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="973ec7e3-13c9-47b8-b10e-5bff2619f164" containerName="marketplace-operator" Feb 01 14:26:41 crc kubenswrapper[4820]: E0201 14:26:41.803593 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df4ea2d8-4be0-4e30-b48a-484a93d725b0" containerName="extract-utilities" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.803598 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="df4ea2d8-4be0-4e30-b48a-484a93d725b0" containerName="extract-utilities" Feb 01 14:26:41 crc kubenswrapper[4820]: E0201 14:26:41.803606 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f73b6fb9-8420-42fe-9b3d-42d17a204743" containerName="registry-server" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.803611 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f73b6fb9-8420-42fe-9b3d-42d17a204743" containerName="registry-server" Feb 01 14:26:41 crc kubenswrapper[4820]: E0201 14:26:41.803620 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df4ea2d8-4be0-4e30-b48a-484a93d725b0" containerName="extract-content" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.803626 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="df4ea2d8-4be0-4e30-b48a-484a93d725b0" containerName="extract-content" Feb 01 14:26:41 crc kubenswrapper[4820]: E0201 14:26:41.803633 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f73b6fb9-8420-42fe-9b3d-42d17a204743" containerName="extract-utilities" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 
14:26:41.803638 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f73b6fb9-8420-42fe-9b3d-42d17a204743" containerName="extract-utilities" Feb 01 14:26:41 crc kubenswrapper[4820]: E0201 14:26:41.803646 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dee68b0-a47b-49fd-a889-7bf3bc58c380" containerName="registry-server" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.803651 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dee68b0-a47b-49fd-a889-7bf3bc58c380" containerName="registry-server" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.803727 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f73b6fb9-8420-42fe-9b3d-42d17a204743" containerName="registry-server" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.803739 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="df4ea2d8-4be0-4e30-b48a-484a93d725b0" containerName="registry-server" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.803747 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dee68b0-a47b-49fd-a889-7bf3bc58c380" containerName="registry-server" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.803757 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="973ec7e3-13c9-47b8-b10e-5bff2619f164" containerName="marketplace-operator" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.803763 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cd3df7b-e150-490b-9785-ccfab6b264b5" containerName="registry-server" Feb 01 14:26:41 crc kubenswrapper[4820]: E0201 14:26:41.803840 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="973ec7e3-13c9-47b8-b10e-5bff2619f164" containerName="marketplace-operator" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.803847 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="973ec7e3-13c9-47b8-b10e-5bff2619f164" containerName="marketplace-operator" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.803959 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="973ec7e3-13c9-47b8-b10e-5bff2619f164" containerName="marketplace-operator" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.804478 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4rv9w" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.807421 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.820740 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4rv9w"] Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.903182 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/116e78e9-fd64-4d4a-8f86-a7a555f1e36e-utilities\") pod \"redhat-marketplace-4rv9w\" (UID: \"116e78e9-fd64-4d4a-8f86-a7a555f1e36e\") " pod="openshift-marketplace/redhat-marketplace-4rv9w" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.903243 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/116e78e9-fd64-4d4a-8f86-a7a555f1e36e-catalog-content\") pod \"redhat-marketplace-4rv9w\" (UID: \"116e78e9-fd64-4d4a-8f86-a7a555f1e36e\") " pod="openshift-marketplace/redhat-marketplace-4rv9w" Feb 01 14:26:41 crc kubenswrapper[4820]: I0201 14:26:41.903261 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqg69\" (UniqueName: \"kubernetes.io/projected/116e78e9-fd64-4d4a-8f86-a7a555f1e36e-kube-api-access-mqg69\") pod \"redhat-marketplace-4rv9w\" (UID: \"116e78e9-fd64-4d4a-8f86-a7a555f1e36e\") " pod="openshift-marketplace/redhat-marketplace-4rv9w" Feb 01 14:26:42 crc kubenswrapper[4820]: I0201 14:26:42.007727 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j9hj6"] Feb 01 14:26:42 crc kubenswrapper[4820]: I0201 14:26:42.013113 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/116e78e9-fd64-4d4a-8f86-a7a555f1e36e-utilities\") pod \"redhat-marketplace-4rv9w\" (UID: \"116e78e9-fd64-4d4a-8f86-a7a555f1e36e\") " pod="openshift-marketplace/redhat-marketplace-4rv9w" Feb 01 14:26:42 crc kubenswrapper[4820]: I0201 14:26:42.013515 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/116e78e9-fd64-4d4a-8f86-a7a555f1e36e-catalog-content\") pod \"redhat-marketplace-4rv9w\" (UID: \"116e78e9-fd64-4d4a-8f86-a7a555f1e36e\") " pod="openshift-marketplace/redhat-marketplace-4rv9w" Feb 01 14:26:42 crc kubenswrapper[4820]: I0201 14:26:42.013669 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqg69\" (UniqueName: \"kubernetes.io/projected/116e78e9-fd64-4d4a-8f86-a7a555f1e36e-kube-api-access-mqg69\") pod \"redhat-marketplace-4rv9w\" (UID: \"116e78e9-fd64-4d4a-8f86-a7a555f1e36e\") " pod="openshift-marketplace/redhat-marketplace-4rv9w" Feb 01 14:26:42 crc kubenswrapper[4820]: I0201 14:26:42.014479 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j9hj6" Feb 01 14:26:42 crc kubenswrapper[4820]: I0201 14:26:42.016936 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/116e78e9-fd64-4d4a-8f86-a7a555f1e36e-utilities\") pod \"redhat-marketplace-4rv9w\" (UID: \"116e78e9-fd64-4d4a-8f86-a7a555f1e36e\") " pod="openshift-marketplace/redhat-marketplace-4rv9w" Feb 01 14:26:42 crc kubenswrapper[4820]: I0201 14:26:42.017448 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 01 14:26:42 crc kubenswrapper[4820]: I0201 14:26:42.018636 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/116e78e9-fd64-4d4a-8f86-a7a555f1e36e-catalog-content\") pod \"redhat-marketplace-4rv9w\" (UID: \"116e78e9-fd64-4d4a-8f86-a7a555f1e36e\") " pod="openshift-marketplace/redhat-marketplace-4rv9w" Feb 01 14:26:42 crc kubenswrapper[4820]: I0201 14:26:42.027576 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j9hj6"] Feb 01 14:26:42 crc kubenswrapper[4820]: I0201 14:26:42.037211 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqg69\" (UniqueName: \"kubernetes.io/projected/116e78e9-fd64-4d4a-8f86-a7a555f1e36e-kube-api-access-mqg69\") pod \"redhat-marketplace-4rv9w\" (UID: \"116e78e9-fd64-4d4a-8f86-a7a555f1e36e\") " pod="openshift-marketplace/redhat-marketplace-4rv9w" Feb 01 14:26:42 crc kubenswrapper[4820]: I0201 14:26:42.115706 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fc03fc8-6e28-4787-8916-0d53e1b11ae8-catalog-content\") pod \"redhat-operators-j9hj6\" (UID: \"0fc03fc8-6e28-4787-8916-0d53e1b11ae8\") " pod="openshift-marketplace/redhat-operators-j9hj6" Feb 01 14:26:42 crc kubenswrapper[4820]: I0201 14:26:42.115757 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwkdw\" (UniqueName: \"kubernetes.io/projected/0fc03fc8-6e28-4787-8916-0d53e1b11ae8-kube-api-access-kwkdw\") pod \"redhat-operators-j9hj6\" (UID: \"0fc03fc8-6e28-4787-8916-0d53e1b11ae8\") " pod="openshift-marketplace/redhat-operators-j9hj6" Feb 01 14:26:42 crc kubenswrapper[4820]: I0201 14:26:42.115783 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc03fc8-6e28-4787-8916-0d53e1b11ae8-utilities\") pod \"redhat-operators-j9hj6\" (UID: \"0fc03fc8-6e28-4787-8916-0d53e1b11ae8\") " pod="openshift-marketplace/redhat-operators-j9hj6" Feb 01 14:26:42 crc kubenswrapper[4820]: I0201 14:26:42.119305 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4rv9w" Feb 01 14:26:42 crc kubenswrapper[4820]: I0201 14:26:42.217110 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fc03fc8-6e28-4787-8916-0d53e1b11ae8-catalog-content\") pod \"redhat-operators-j9hj6\" (UID: \"0fc03fc8-6e28-4787-8916-0d53e1b11ae8\") " pod="openshift-marketplace/redhat-operators-j9hj6" Feb 01 14:26:42 crc kubenswrapper[4820]: I0201 14:26:42.217432 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwkdw\" (UniqueName: \"kubernetes.io/projected/0fc03fc8-6e28-4787-8916-0d53e1b11ae8-kube-api-access-kwkdw\") pod \"redhat-operators-j9hj6\" (UID: \"0fc03fc8-6e28-4787-8916-0d53e1b11ae8\") " pod="openshift-marketplace/redhat-operators-j9hj6" Feb 01 14:26:42 crc kubenswrapper[4820]: I0201 14:26:42.217480 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc03fc8-6e28-4787-8916-0d53e1b11ae8-utilities\") pod \"redhat-operators-j9hj6\" (UID: \"0fc03fc8-6e28-4787-8916-0d53e1b11ae8\") " pod="openshift-marketplace/redhat-operators-j9hj6" Feb 01 14:26:42 crc kubenswrapper[4820]: I0201 14:26:42.217548 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fc03fc8-6e28-4787-8916-0d53e1b11ae8-catalog-content\") pod \"redhat-operators-j9hj6\" (UID: \"0fc03fc8-6e28-4787-8916-0d53e1b11ae8\") " pod="openshift-marketplace/redhat-operators-j9hj6" Feb 01 14:26:42 crc kubenswrapper[4820]: I0201 14:26:42.217899 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc03fc8-6e28-4787-8916-0d53e1b11ae8-utilities\") pod \"redhat-operators-j9hj6\" (UID: \"0fc03fc8-6e28-4787-8916-0d53e1b11ae8\") " pod="openshift-marketplace/redhat-operators-j9hj6" Feb 01 14:26:42 crc kubenswrapper[4820]: I0201 14:26:42.238786 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwkdw\" (UniqueName: \"kubernetes.io/projected/0fc03fc8-6e28-4787-8916-0d53e1b11ae8-kube-api-access-kwkdw\") pod \"redhat-operators-j9hj6\" (UID: \"0fc03fc8-6e28-4787-8916-0d53e1b11ae8\") " pod="openshift-marketplace/redhat-operators-j9hj6" Feb 01 14:26:42 crc kubenswrapper[4820]: I0201 14:26:42.389777 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j9hj6" Feb 01 14:26:42 crc kubenswrapper[4820]: W0201 14:26:42.515625 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod116e78e9_fd64_4d4a_8f86_a7a555f1e36e.slice/crio-5d3d7229df28ec28985e2d19138e92d7a2b680233531d9803a22b9c3ef871218 WatchSource:0}: Error finding container 5d3d7229df28ec28985e2d19138e92d7a2b680233531d9803a22b9c3ef871218: Status 404 returned error can't find the container with id 5d3d7229df28ec28985e2d19138e92d7a2b680233531d9803a22b9c3ef871218 Feb 01 14:26:42 crc kubenswrapper[4820]: I0201 14:26:42.520620 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4rv9w"] Feb 01 14:26:42 crc kubenswrapper[4820]: I0201 14:26:42.769765 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j9hj6"] Feb 01 14:26:43 crc kubenswrapper[4820]: I0201 14:26:43.042487 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9hj6" event={"ID":"0fc03fc8-6e28-4787-8916-0d53e1b11ae8","Type":"ContainerStarted","Data":"f8c540f7418d46bce7259fba69cdc39a2cdcdd7aac21fb9f43a9356ca9336bf6"} Feb 01 14:26:43 crc kubenswrapper[4820]: I0201 14:26:43.044436 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rv9w" event={"ID":"116e78e9-fd64-4d4a-8f86-a7a555f1e36e","Type":"ContainerStarted","Data":"5d3d7229df28ec28985e2d19138e92d7a2b680233531d9803a22b9c3ef871218"} Feb 01 14:26:43 crc kubenswrapper[4820]: I0201 14:26:43.424228 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-k7g28" Feb 01 14:26:43 crc kubenswrapper[4820]: I0201 14:26:43.475907 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fzchg"] Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.053012 4820 generic.go:334] "Generic (PLEG): container finished" podID="0fc03fc8-6e28-4787-8916-0d53e1b11ae8" containerID="7d69de690b120d6a9e177904d02f1e9a1f5b87fdfdf0dade50082b15854f5439" exitCode=0 Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.053156 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9hj6" event={"ID":"0fc03fc8-6e28-4787-8916-0d53e1b11ae8","Type":"ContainerDied","Data":"7d69de690b120d6a9e177904d02f1e9a1f5b87fdfdf0dade50082b15854f5439"} Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.054931 4820 generic.go:334] "Generic (PLEG): container finished" podID="116e78e9-fd64-4d4a-8f86-a7a555f1e36e" containerID="1e5a3fd20018ca30b8ebacd8b6af65b9eadad65e1bac7d0273dd230e079e992f" exitCode=0 Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.055113 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rv9w" event={"ID":"116e78e9-fd64-4d4a-8f86-a7a555f1e36e","Type":"ContainerDied","Data":"1e5a3fd20018ca30b8ebacd8b6af65b9eadad65e1bac7d0273dd230e079e992f"} Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.206624 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8nz9l"] Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.207864 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8nz9l" Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.214276 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.221056 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8nz9l"] Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.346907 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32aaf442-6b0a-4415-a767-4fd051191e47-catalog-content\") pod \"certified-operators-8nz9l\" (UID: \"32aaf442-6b0a-4415-a767-4fd051191e47\") " pod="openshift-marketplace/certified-operators-8nz9l" Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.346987 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv2br\" (UniqueName: \"kubernetes.io/projected/32aaf442-6b0a-4415-a767-4fd051191e47-kube-api-access-gv2br\") pod \"certified-operators-8nz9l\" (UID: \"32aaf442-6b0a-4415-a767-4fd051191e47\") " pod="openshift-marketplace/certified-operators-8nz9l" Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.347081 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32aaf442-6b0a-4415-a767-4fd051191e47-utilities\") pod \"certified-operators-8nz9l\" (UID: \"32aaf442-6b0a-4415-a767-4fd051191e47\") " pod="openshift-marketplace/certified-operators-8nz9l" Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.404994 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z5b9w"] Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.406356 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z5b9w" Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.408063 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.415152 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z5b9w"] Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.448312 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32aaf442-6b0a-4415-a767-4fd051191e47-catalog-content\") pod \"certified-operators-8nz9l\" (UID: \"32aaf442-6b0a-4415-a767-4fd051191e47\") " pod="openshift-marketplace/certified-operators-8nz9l" Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.448359 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv2br\" (UniqueName: \"kubernetes.io/projected/32aaf442-6b0a-4415-a767-4fd051191e47-kube-api-access-gv2br\") pod \"certified-operators-8nz9l\" (UID: \"32aaf442-6b0a-4415-a767-4fd051191e47\") " pod="openshift-marketplace/certified-operators-8nz9l" Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.448407 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32aaf442-6b0a-4415-a767-4fd051191e47-utilities\") pod \"certified-operators-8nz9l\" (UID: \"32aaf442-6b0a-4415-a767-4fd051191e47\") " pod="openshift-marketplace/certified-operators-8nz9l" Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.448960 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32aaf442-6b0a-4415-a767-4fd051191e47-catalog-content\") pod \"certified-operators-8nz9l\" (UID: \"32aaf442-6b0a-4415-a767-4fd051191e47\") " pod="openshift-marketplace/certified-operators-8nz9l" Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.449048 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32aaf442-6b0a-4415-a767-4fd051191e47-utilities\") pod \"certified-operators-8nz9l\" (UID: \"32aaf442-6b0a-4415-a767-4fd051191e47\") " pod="openshift-marketplace/certified-operators-8nz9l" Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.467566 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv2br\" (UniqueName: \"kubernetes.io/projected/32aaf442-6b0a-4415-a767-4fd051191e47-kube-api-access-gv2br\") pod \"certified-operators-8nz9l\" (UID: \"32aaf442-6b0a-4415-a767-4fd051191e47\") " pod="openshift-marketplace/certified-operators-8nz9l" Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.549635 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd93d0a-d294-437c-9d6d-b840be862df0-catalog-content\") pod \"community-operators-z5b9w\" (UID: \"fdd93d0a-d294-437c-9d6d-b840be862df0\") " pod="openshift-marketplace/community-operators-z5b9w" Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.549678 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd93d0a-d294-437c-9d6d-b840be862df0-utilities\") pod \"community-operators-z5b9w\" (UID: 
\"fdd93d0a-d294-437c-9d6d-b840be862df0\") " pod="openshift-marketplace/community-operators-z5b9w" Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.549766 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jctm9\" (UniqueName: \"kubernetes.io/projected/fdd93d0a-d294-437c-9d6d-b840be862df0-kube-api-access-jctm9\") pod \"community-operators-z5b9w\" (UID: \"fdd93d0a-d294-437c-9d6d-b840be862df0\") " pod="openshift-marketplace/community-operators-z5b9w" Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.570513 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8nz9l" Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.650533 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd93d0a-d294-437c-9d6d-b840be862df0-catalog-content\") pod \"community-operators-z5b9w\" (UID: \"fdd93d0a-d294-437c-9d6d-b840be862df0\") " pod="openshift-marketplace/community-operators-z5b9w" Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.650583 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd93d0a-d294-437c-9d6d-b840be862df0-utilities\") pod \"community-operators-z5b9w\" (UID: \"fdd93d0a-d294-437c-9d6d-b840be862df0\") " pod="openshift-marketplace/community-operators-z5b9w" Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.650639 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jctm9\" (UniqueName: \"kubernetes.io/projected/fdd93d0a-d294-437c-9d6d-b840be862df0-kube-api-access-jctm9\") pod \"community-operators-z5b9w\" (UID: \"fdd93d0a-d294-437c-9d6d-b840be862df0\") " pod="openshift-marketplace/community-operators-z5b9w" Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.651628 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd93d0a-d294-437c-9d6d-b840be862df0-catalog-content\") pod \"community-operators-z5b9w\" (UID: \"fdd93d0a-d294-437c-9d6d-b840be862df0\") " pod="openshift-marketplace/community-operators-z5b9w" Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.651740 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd93d0a-d294-437c-9d6d-b840be862df0-utilities\") pod \"community-operators-z5b9w\" (UID: \"fdd93d0a-d294-437c-9d6d-b840be862df0\") " pod="openshift-marketplace/community-operators-z5b9w" Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.678380 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jctm9\" (UniqueName: \"kubernetes.io/projected/fdd93d0a-d294-437c-9d6d-b840be862df0-kube-api-access-jctm9\") pod \"community-operators-z5b9w\" (UID: \"fdd93d0a-d294-437c-9d6d-b840be862df0\") " pod="openshift-marketplace/community-operators-z5b9w" Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.731935 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z5b9w" Feb 01 14:26:44 crc kubenswrapper[4820]: I0201 14:26:44.973776 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8nz9l"] Feb 01 14:26:44 crc kubenswrapper[4820]: W0201 14:26:44.980832 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32aaf442_6b0a_4415_a767_4fd051191e47.slice/crio-4dc472ea2e3c1d87a651b36165cbf1c067b89ee5d06ae930ac08eaa0ccb5046a WatchSource:0}: Error finding container 4dc472ea2e3c1d87a651b36165cbf1c067b89ee5d06ae930ac08eaa0ccb5046a: Status 404 returned error can't find the container with id 4dc472ea2e3c1d87a651b36165cbf1c067b89ee5d06ae930ac08eaa0ccb5046a Feb 01 14:26:45 crc kubenswrapper[4820]: I0201 14:26:45.061318 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9hj6" event={"ID":"0fc03fc8-6e28-4787-8916-0d53e1b11ae8","Type":"ContainerStarted","Data":"f12caaf168ac19e93c6ced3c82a67763ea3ce16d6ce88ec0a00ca9b8523559cc"} Feb 01 14:26:45 crc kubenswrapper[4820]: I0201 14:26:45.063054 4820 generic.go:334] "Generic (PLEG): container finished" podID="116e78e9-fd64-4d4a-8f86-a7a555f1e36e" containerID="b4112c31e3da32485d0e715e0026a4b44cf17fd34552e0701f8b60b496b081e5" exitCode=0 Feb 01 14:26:45 crc kubenswrapper[4820]: I0201 14:26:45.063122 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rv9w" event={"ID":"116e78e9-fd64-4d4a-8f86-a7a555f1e36e","Type":"ContainerDied","Data":"b4112c31e3da32485d0e715e0026a4b44cf17fd34552e0701f8b60b496b081e5"} Feb 01 14:26:45 crc kubenswrapper[4820]: I0201 14:26:45.063959 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nz9l" event={"ID":"32aaf442-6b0a-4415-a767-4fd051191e47","Type":"ContainerStarted","Data":"4dc472ea2e3c1d87a651b36165cbf1c067b89ee5d06ae930ac08eaa0ccb5046a"} Feb 01 14:26:45 crc kubenswrapper[4820]: I0201 14:26:45.150409 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z5b9w"] Feb 01 14:26:45 crc kubenswrapper[4820]: W0201 14:26:45.160822 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdd93d0a_d294_437c_9d6d_b840be862df0.slice/crio-ce11c748706afc5d30ce5132507fdbc28ac050737a83d55d855aa2e0a78283e3 WatchSource:0}: Error finding container ce11c748706afc5d30ce5132507fdbc28ac050737a83d55d855aa2e0a78283e3: Status 404 returned error can't find the container with id ce11c748706afc5d30ce5132507fdbc28ac050737a83d55d855aa2e0a78283e3 Feb 01 14:26:46 crc kubenswrapper[4820]: I0201 14:26:46.082119 4820 generic.go:334] "Generic (PLEG): container finished" podID="32aaf442-6b0a-4415-a767-4fd051191e47" containerID="aeae6a6147cf7e44ab7ddb369f64387953b9ad59c00acfdc47d047ddde7fb3b8" exitCode=0 Feb 01 14:26:46 crc kubenswrapper[4820]: I0201 14:26:46.082366 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nz9l" event={"ID":"32aaf442-6b0a-4415-a767-4fd051191e47","Type":"ContainerDied","Data":"aeae6a6147cf7e44ab7ddb369f64387953b9ad59c00acfdc47d047ddde7fb3b8"} Feb 01 14:26:46 crc kubenswrapper[4820]: I0201 14:26:46.089953 4820 generic.go:334] "Generic (PLEG): container finished" podID="0fc03fc8-6e28-4787-8916-0d53e1b11ae8" containerID="f12caaf168ac19e93c6ced3c82a67763ea3ce16d6ce88ec0a00ca9b8523559cc" 
exitCode=0 Feb 01 14:26:46 crc kubenswrapper[4820]: I0201 14:26:46.090045 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9hj6" event={"ID":"0fc03fc8-6e28-4787-8916-0d53e1b11ae8","Type":"ContainerDied","Data":"f12caaf168ac19e93c6ced3c82a67763ea3ce16d6ce88ec0a00ca9b8523559cc"} Feb 01 14:26:46 crc kubenswrapper[4820]: I0201 14:26:46.094561 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rv9w" event={"ID":"116e78e9-fd64-4d4a-8f86-a7a555f1e36e","Type":"ContainerStarted","Data":"26f81a084d5f1d48e713fedf4e1f73dd42f292dc168575651128e93e1810669f"} Feb 01 14:26:46 crc kubenswrapper[4820]: I0201 14:26:46.096169 4820 generic.go:334] "Generic (PLEG): container finished" podID="fdd93d0a-d294-437c-9d6d-b840be862df0" containerID="f4dd523711b2ff32890313aeacc596d63741cd75e3307accbfdf9bad18d5e5cc" exitCode=0 Feb 01 14:26:46 crc kubenswrapper[4820]: I0201 14:26:46.096192 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5b9w" event={"ID":"fdd93d0a-d294-437c-9d6d-b840be862df0","Type":"ContainerDied","Data":"f4dd523711b2ff32890313aeacc596d63741cd75e3307accbfdf9bad18d5e5cc"} Feb 01 14:26:46 crc kubenswrapper[4820]: I0201 14:26:46.096206 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5b9w" event={"ID":"fdd93d0a-d294-437c-9d6d-b840be862df0","Type":"ContainerStarted","Data":"ce11c748706afc5d30ce5132507fdbc28ac050737a83d55d855aa2e0a78283e3"} Feb 01 14:26:46 crc kubenswrapper[4820]: I0201 14:26:46.137159 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4rv9w" podStartSLOduration=3.730378863 podStartE2EDuration="5.137142196s" podCreationTimestamp="2026-02-01 14:26:41 +0000 UTC" firstStartedPulling="2026-02-01 14:26:44.056611909 +0000 UTC m=+345.576978193" lastFinishedPulling="2026-02-01 14:26:45.463375242 +0000 UTC m=+346.983741526" observedRunningTime="2026-02-01 14:26:46.119147802 +0000 UTC m=+347.639514086" watchObservedRunningTime="2026-02-01 14:26:46.137142196 +0000 UTC m=+347.657508480" Feb 01 14:26:47 crc kubenswrapper[4820]: I0201 14:26:47.108098 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9hj6" event={"ID":"0fc03fc8-6e28-4787-8916-0d53e1b11ae8","Type":"ContainerStarted","Data":"568c8fabb22ff45c805f933b7bb331228a9b89953a55eb63a630e6993b7180b9"} Feb 01 14:26:47 crc kubenswrapper[4820]: I0201 14:26:47.109948 4820 generic.go:334] "Generic (PLEG): container finished" podID="fdd93d0a-d294-437c-9d6d-b840be862df0" containerID="77b65a409ffd996391705f2435274ccc1c05eefc570364e3a33684bbad85e46e" exitCode=0 Feb 01 14:26:47 crc kubenswrapper[4820]: I0201 14:26:47.110008 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5b9w" event={"ID":"fdd93d0a-d294-437c-9d6d-b840be862df0","Type":"ContainerDied","Data":"77b65a409ffd996391705f2435274ccc1c05eefc570364e3a33684bbad85e46e"} Feb 01 14:26:47 crc kubenswrapper[4820]: I0201 14:26:47.120056 4820 generic.go:334] "Generic (PLEG): container finished" podID="32aaf442-6b0a-4415-a767-4fd051191e47" containerID="35a9f5969033906534d2abd5ebb167399c21762b6acff87164e59d93ae6280a2" exitCode=0 Feb 01 14:26:47 crc kubenswrapper[4820]: I0201 14:26:47.120123 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nz9l" 
event={"ID":"32aaf442-6b0a-4415-a767-4fd051191e47","Type":"ContainerDied","Data":"35a9f5969033906534d2abd5ebb167399c21762b6acff87164e59d93ae6280a2"} Feb 01 14:26:47 crc kubenswrapper[4820]: I0201 14:26:47.137284 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j9hj6" podStartSLOduration=3.6979798820000003 podStartE2EDuration="6.137260202s" podCreationTimestamp="2026-02-01 14:26:41 +0000 UTC" firstStartedPulling="2026-02-01 14:26:44.055300167 +0000 UTC m=+345.575666451" lastFinishedPulling="2026-02-01 14:26:46.494580487 +0000 UTC m=+348.014946771" observedRunningTime="2026-02-01 14:26:47.126143317 +0000 UTC m=+348.646509601" watchObservedRunningTime="2026-02-01 14:26:47.137260202 +0000 UTC m=+348.657626486" Feb 01 14:26:48 crc kubenswrapper[4820]: I0201 14:26:48.128100 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nz9l" event={"ID":"32aaf442-6b0a-4415-a767-4fd051191e47","Type":"ContainerStarted","Data":"403a2fcf12fd166155d1b97d079ab77fda336e83e9660ec3f0833c8c346da7a9"} Feb 01 14:26:48 crc kubenswrapper[4820]: I0201 14:26:48.131111 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5b9w" event={"ID":"fdd93d0a-d294-437c-9d6d-b840be862df0","Type":"ContainerStarted","Data":"fc8d3dc3d00d19e05d38948a11a3603410162ab7bebba001cca3f1e0965663f1"} Feb 01 14:26:48 crc kubenswrapper[4820]: I0201 14:26:48.158759 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8nz9l" podStartSLOduration=2.6530658369999998 podStartE2EDuration="4.158742426s" podCreationTimestamp="2026-02-01 14:26:44 +0000 UTC" firstStartedPulling="2026-02-01 14:26:46.084432193 +0000 UTC m=+347.604798487" lastFinishedPulling="2026-02-01 14:26:47.590108792 +0000 UTC m=+349.110475076" observedRunningTime="2026-02-01 14:26:48.157574347 +0000 UTC m=+349.677940631" watchObservedRunningTime="2026-02-01 14:26:48.158742426 +0000 UTC m=+349.679108710" Feb 01 14:26:48 crc kubenswrapper[4820]: I0201 14:26:48.178217 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z5b9w" podStartSLOduration=2.741194337 podStartE2EDuration="4.178196287s" podCreationTimestamp="2026-02-01 14:26:44 +0000 UTC" firstStartedPulling="2026-02-01 14:26:46.097192049 +0000 UTC m=+347.617558333" lastFinishedPulling="2026-02-01 14:26:47.534193999 +0000 UTC m=+349.054560283" observedRunningTime="2026-02-01 14:26:48.17747535 +0000 UTC m=+349.697841634" watchObservedRunningTime="2026-02-01 14:26:48.178196287 +0000 UTC m=+349.698562581" Feb 01 14:26:52 crc kubenswrapper[4820]: I0201 14:26:52.120198 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4rv9w" Feb 01 14:26:52 crc kubenswrapper[4820]: I0201 14:26:52.121780 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4rv9w" Feb 01 14:26:52 crc kubenswrapper[4820]: I0201 14:26:52.179697 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4rv9w" Feb 01 14:26:52 crc kubenswrapper[4820]: I0201 14:26:52.390830 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j9hj6" Feb 01 14:26:52 crc kubenswrapper[4820]: I0201 14:26:52.392294 4820 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j9hj6" Feb 01 14:26:53 crc kubenswrapper[4820]: I0201 14:26:53.208943 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4rv9w" Feb 01 14:26:53 crc kubenswrapper[4820]: I0201 14:26:53.436564 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j9hj6" podUID="0fc03fc8-6e28-4787-8916-0d53e1b11ae8" containerName="registry-server" probeResult="failure" output=< Feb 01 14:26:53 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 01 14:26:53 crc kubenswrapper[4820]: > Feb 01 14:26:54 crc kubenswrapper[4820]: I0201 14:26:54.571370 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8nz9l" Feb 01 14:26:54 crc kubenswrapper[4820]: I0201 14:26:54.571421 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8nz9l" Feb 01 14:26:54 crc kubenswrapper[4820]: I0201 14:26:54.610506 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8nz9l" Feb 01 14:26:54 crc kubenswrapper[4820]: I0201 14:26:54.732824 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z5b9w" Feb 01 14:26:54 crc kubenswrapper[4820]: I0201 14:26:54.732898 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z5b9w" Feb 01 14:26:54 crc kubenswrapper[4820]: I0201 14:26:54.768477 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z5b9w" Feb 01 14:26:55 crc kubenswrapper[4820]: I0201 14:26:55.220029 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z5b9w" Feb 01 14:26:55 crc kubenswrapper[4820]: I0201 14:26:55.224767 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8nz9l" Feb 01 14:27:02 crc kubenswrapper[4820]: I0201 14:27:02.441651 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j9hj6" Feb 01 14:27:02 crc kubenswrapper[4820]: I0201 14:27:02.480971 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j9hj6" Feb 01 14:27:08 crc kubenswrapper[4820]: I0201 14:27:08.509552 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" podUID="e180d67d-fdb1-4874-a793-abe25452fe6d" containerName="registry" containerID="cri-o://8a58e20fbce74158855f23b72f8ecb3f597aaf95f003605e8ba007a694f8bb73" gracePeriod=30 Feb 01 14:27:08 crc kubenswrapper[4820]: I0201 14:27:08.686629 4820 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-fzchg container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.12:5000/healthz\": dial tcp 10.217.0.12:5000: connect: connection refused" start-of-body= Feb 01 14:27:08 crc kubenswrapper[4820]: I0201 14:27:08.686955 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" podUID="e180d67d-fdb1-4874-a793-abe25452fe6d" 
containerName="registry" probeResult="failure" output="Get \"https://10.217.0.12:5000/healthz\": dial tcp 10.217.0.12:5000: connect: connection refused" Feb 01 14:27:08 crc kubenswrapper[4820]: I0201 14:27:08.874992 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:27:08 crc kubenswrapper[4820]: I0201 14:27:08.974504 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz24t\" (UniqueName: \"kubernetes.io/projected/e180d67d-fdb1-4874-a793-abe25452fe6d-kube-api-access-xz24t\") pod \"e180d67d-fdb1-4874-a793-abe25452fe6d\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " Feb 01 14:27:08 crc kubenswrapper[4820]: I0201 14:27:08.974569 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e180d67d-fdb1-4874-a793-abe25452fe6d-bound-sa-token\") pod \"e180d67d-fdb1-4874-a793-abe25452fe6d\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " Feb 01 14:27:08 crc kubenswrapper[4820]: I0201 14:27:08.974629 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e180d67d-fdb1-4874-a793-abe25452fe6d-installation-pull-secrets\") pod \"e180d67d-fdb1-4874-a793-abe25452fe6d\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " Feb 01 14:27:08 crc kubenswrapper[4820]: I0201 14:27:08.974664 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e180d67d-fdb1-4874-a793-abe25452fe6d-trusted-ca\") pod \"e180d67d-fdb1-4874-a793-abe25452fe6d\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " Feb 01 14:27:08 crc kubenswrapper[4820]: I0201 14:27:08.974691 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e180d67d-fdb1-4874-a793-abe25452fe6d-registry-certificates\") pod \"e180d67d-fdb1-4874-a793-abe25452fe6d\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " Feb 01 14:27:08 crc kubenswrapper[4820]: I0201 14:27:08.974727 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e180d67d-fdb1-4874-a793-abe25452fe6d-ca-trust-extracted\") pod \"e180d67d-fdb1-4874-a793-abe25452fe6d\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " Feb 01 14:27:08 crc kubenswrapper[4820]: I0201 14:27:08.974757 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e180d67d-fdb1-4874-a793-abe25452fe6d-registry-tls\") pod \"e180d67d-fdb1-4874-a793-abe25452fe6d\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " Feb 01 14:27:08 crc kubenswrapper[4820]: I0201 14:27:08.974980 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"e180d67d-fdb1-4874-a793-abe25452fe6d\" (UID: \"e180d67d-fdb1-4874-a793-abe25452fe6d\") " Feb 01 14:27:08 crc kubenswrapper[4820]: I0201 14:27:08.976160 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e180d67d-fdb1-4874-a793-abe25452fe6d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod 
"e180d67d-fdb1-4874-a793-abe25452fe6d" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:27:08 crc kubenswrapper[4820]: I0201 14:27:08.976708 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e180d67d-fdb1-4874-a793-abe25452fe6d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "e180d67d-fdb1-4874-a793-abe25452fe6d" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:27:08 crc kubenswrapper[4820]: I0201 14:27:08.988201 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e180d67d-fdb1-4874-a793-abe25452fe6d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "e180d67d-fdb1-4874-a793-abe25452fe6d" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:27:08 crc kubenswrapper[4820]: I0201 14:27:08.989276 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e180d67d-fdb1-4874-a793-abe25452fe6d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "e180d67d-fdb1-4874-a793-abe25452fe6d" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:27:08 crc kubenswrapper[4820]: I0201 14:27:08.989673 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e180d67d-fdb1-4874-a793-abe25452fe6d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "e180d67d-fdb1-4874-a793-abe25452fe6d" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:27:08 crc kubenswrapper[4820]: I0201 14:27:08.990017 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e180d67d-fdb1-4874-a793-abe25452fe6d-kube-api-access-xz24t" (OuterVolumeSpecName: "kube-api-access-xz24t") pod "e180d67d-fdb1-4874-a793-abe25452fe6d" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d"). InnerVolumeSpecName "kube-api-access-xz24t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:27:08 crc kubenswrapper[4820]: I0201 14:27:08.990292 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "e180d67d-fdb1-4874-a793-abe25452fe6d" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 01 14:27:08 crc kubenswrapper[4820]: I0201 14:27:08.994232 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e180d67d-fdb1-4874-a793-abe25452fe6d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "e180d67d-fdb1-4874-a793-abe25452fe6d" (UID: "e180d67d-fdb1-4874-a793-abe25452fe6d"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:27:09 crc kubenswrapper[4820]: I0201 14:27:09.076788 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz24t\" (UniqueName: \"kubernetes.io/projected/e180d67d-fdb1-4874-a793-abe25452fe6d-kube-api-access-xz24t\") on node \"crc\" DevicePath \"\"" Feb 01 14:27:09 crc kubenswrapper[4820]: I0201 14:27:09.076818 4820 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e180d67d-fdb1-4874-a793-abe25452fe6d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 01 14:27:09 crc kubenswrapper[4820]: I0201 14:27:09.076827 4820 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e180d67d-fdb1-4874-a793-abe25452fe6d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 01 14:27:09 crc kubenswrapper[4820]: I0201 14:27:09.076837 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e180d67d-fdb1-4874-a793-abe25452fe6d-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 01 14:27:09 crc kubenswrapper[4820]: I0201 14:27:09.076845 4820 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e180d67d-fdb1-4874-a793-abe25452fe6d-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 01 14:27:09 crc kubenswrapper[4820]: I0201 14:27:09.076853 4820 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e180d67d-fdb1-4874-a793-abe25452fe6d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 01 14:27:09 crc kubenswrapper[4820]: I0201 14:27:09.076863 4820 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e180d67d-fdb1-4874-a793-abe25452fe6d-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 01 14:27:09 crc kubenswrapper[4820]: I0201 14:27:09.250960 4820 generic.go:334] "Generic (PLEG): container finished" podID="e180d67d-fdb1-4874-a793-abe25452fe6d" containerID="8a58e20fbce74158855f23b72f8ecb3f597aaf95f003605e8ba007a694f8bb73" exitCode=0 Feb 01 14:27:09 crc kubenswrapper[4820]: I0201 14:27:09.251023 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" event={"ID":"e180d67d-fdb1-4874-a793-abe25452fe6d","Type":"ContainerDied","Data":"8a58e20fbce74158855f23b72f8ecb3f597aaf95f003605e8ba007a694f8bb73"} Feb 01 14:27:09 crc kubenswrapper[4820]: I0201 14:27:09.251054 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" event={"ID":"e180d67d-fdb1-4874-a793-abe25452fe6d","Type":"ContainerDied","Data":"344664dba510c7fa1121b6a5af9f9eb2fe5509a562d91517cce79cc71a7860ff"} Feb 01 14:27:09 crc kubenswrapper[4820]: I0201 14:27:09.251078 4820 scope.go:117] "RemoveContainer" containerID="8a58e20fbce74158855f23b72f8ecb3f597aaf95f003605e8ba007a694f8bb73" Feb 01 14:27:09 crc kubenswrapper[4820]: I0201 14:27:09.251285 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fzchg" Feb 01 14:27:09 crc kubenswrapper[4820]: I0201 14:27:09.272725 4820 scope.go:117] "RemoveContainer" containerID="8a58e20fbce74158855f23b72f8ecb3f597aaf95f003605e8ba007a694f8bb73" Feb 01 14:27:09 crc kubenswrapper[4820]: E0201 14:27:09.275099 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a58e20fbce74158855f23b72f8ecb3f597aaf95f003605e8ba007a694f8bb73\": container with ID starting with 8a58e20fbce74158855f23b72f8ecb3f597aaf95f003605e8ba007a694f8bb73 not found: ID does not exist" containerID="8a58e20fbce74158855f23b72f8ecb3f597aaf95f003605e8ba007a694f8bb73" Feb 01 14:27:09 crc kubenswrapper[4820]: I0201 14:27:09.275139 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a58e20fbce74158855f23b72f8ecb3f597aaf95f003605e8ba007a694f8bb73"} err="failed to get container status \"8a58e20fbce74158855f23b72f8ecb3f597aaf95f003605e8ba007a694f8bb73\": rpc error: code = NotFound desc = could not find container \"8a58e20fbce74158855f23b72f8ecb3f597aaf95f003605e8ba007a694f8bb73\": container with ID starting with 8a58e20fbce74158855f23b72f8ecb3f597aaf95f003605e8ba007a694f8bb73 not found: ID does not exist" Feb 01 14:27:09 crc kubenswrapper[4820]: I0201 14:27:09.286559 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fzchg"] Feb 01 14:27:09 crc kubenswrapper[4820]: I0201 14:27:09.291050 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fzchg"] Feb 01 14:27:11 crc kubenswrapper[4820]: I0201 14:27:11.204502 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e180d67d-fdb1-4874-a793-abe25452fe6d" path="/var/lib/kubelet/pods/e180d67d-fdb1-4874-a793-abe25452fe6d/volumes" Feb 01 14:27:19 crc kubenswrapper[4820]: I0201 14:27:19.242678 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:27:19 crc kubenswrapper[4820]: I0201 14:27:19.243296 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:27:49 crc kubenswrapper[4820]: I0201 14:27:49.243150 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:27:49 crc kubenswrapper[4820]: I0201 14:27:49.243746 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:28:19 crc kubenswrapper[4820]: I0201 14:28:19.242470 4820 patch_prober.go:28] interesting 
pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:28:19 crc kubenswrapper[4820]: I0201 14:28:19.243074 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:28:19 crc kubenswrapper[4820]: I0201 14:28:19.243136 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 14:28:19 crc kubenswrapper[4820]: I0201 14:28:19.243901 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7b99d9875d1bdea691633e5af419b10acd106e3faee10becf6662bf488aeca9f"} pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 14:28:19 crc kubenswrapper[4820]: I0201 14:28:19.243991 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" containerID="cri-o://7b99d9875d1bdea691633e5af419b10acd106e3faee10becf6662bf488aeca9f" gracePeriod=600 Feb 01 14:28:19 crc kubenswrapper[4820]: I0201 14:28:19.655307 4820 generic.go:334] "Generic (PLEG): container finished" podID="060a9e0b-803f-4ccc-bed6-92614d449527" containerID="7b99d9875d1bdea691633e5af419b10acd106e3faee10becf6662bf488aeca9f" exitCode=0 Feb 01 14:28:19 crc kubenswrapper[4820]: I0201 14:28:19.655359 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerDied","Data":"7b99d9875d1bdea691633e5af419b10acd106e3faee10becf6662bf488aeca9f"} Feb 01 14:28:19 crc kubenswrapper[4820]: I0201 14:28:19.655386 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerStarted","Data":"467d256019f163fae0b6c21f79014e504c0f178df2ba0ae24f36681ac92b6dfe"} Feb 01 14:28:19 crc kubenswrapper[4820]: I0201 14:28:19.655403 4820 scope.go:117] "RemoveContainer" containerID="b6876796e0dceb1527f6fb8ddb92420449461ee7f3b18b445eaeb87e4702e1d4" Feb 01 14:30:00 crc kubenswrapper[4820]: I0201 14:30:00.214997 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499270-bqfsn"] Feb 01 14:30:00 crc kubenswrapper[4820]: E0201 14:30:00.216084 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e180d67d-fdb1-4874-a793-abe25452fe6d" containerName="registry" Feb 01 14:30:00 crc kubenswrapper[4820]: I0201 14:30:00.216116 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e180d67d-fdb1-4874-a793-abe25452fe6d" containerName="registry" Feb 01 14:30:00 crc kubenswrapper[4820]: I0201 14:30:00.216335 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e180d67d-fdb1-4874-a793-abe25452fe6d" 
containerName="registry" Feb 01 14:30:00 crc kubenswrapper[4820]: I0201 14:30:00.217081 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499270-bqfsn" Feb 01 14:30:00 crc kubenswrapper[4820]: I0201 14:30:00.223628 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 01 14:30:00 crc kubenswrapper[4820]: I0201 14:30:00.224066 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 01 14:30:00 crc kubenswrapper[4820]: I0201 14:30:00.227940 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499270-bqfsn"] Feb 01 14:30:00 crc kubenswrapper[4820]: I0201 14:30:00.230461 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffz7z\" (UniqueName: \"kubernetes.io/projected/3efd4961-21a3-451a-aca3-f32bd9e1d045-kube-api-access-ffz7z\") pod \"collect-profiles-29499270-bqfsn\" (UID: \"3efd4961-21a3-451a-aca3-f32bd9e1d045\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499270-bqfsn" Feb 01 14:30:00 crc kubenswrapper[4820]: I0201 14:30:00.230561 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3efd4961-21a3-451a-aca3-f32bd9e1d045-config-volume\") pod \"collect-profiles-29499270-bqfsn\" (UID: \"3efd4961-21a3-451a-aca3-f32bd9e1d045\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499270-bqfsn" Feb 01 14:30:00 crc kubenswrapper[4820]: I0201 14:30:00.230578 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3efd4961-21a3-451a-aca3-f32bd9e1d045-secret-volume\") pod \"collect-profiles-29499270-bqfsn\" (UID: \"3efd4961-21a3-451a-aca3-f32bd9e1d045\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499270-bqfsn" Feb 01 14:30:00 crc kubenswrapper[4820]: I0201 14:30:00.332002 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffz7z\" (UniqueName: \"kubernetes.io/projected/3efd4961-21a3-451a-aca3-f32bd9e1d045-kube-api-access-ffz7z\") pod \"collect-profiles-29499270-bqfsn\" (UID: \"3efd4961-21a3-451a-aca3-f32bd9e1d045\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499270-bqfsn" Feb 01 14:30:00 crc kubenswrapper[4820]: I0201 14:30:00.332373 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3efd4961-21a3-451a-aca3-f32bd9e1d045-secret-volume\") pod \"collect-profiles-29499270-bqfsn\" (UID: \"3efd4961-21a3-451a-aca3-f32bd9e1d045\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499270-bqfsn" Feb 01 14:30:00 crc kubenswrapper[4820]: I0201 14:30:00.332517 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3efd4961-21a3-451a-aca3-f32bd9e1d045-config-volume\") pod \"collect-profiles-29499270-bqfsn\" (UID: \"3efd4961-21a3-451a-aca3-f32bd9e1d045\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499270-bqfsn" Feb 01 14:30:00 crc kubenswrapper[4820]: I0201 14:30:00.333720 4820 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3efd4961-21a3-451a-aca3-f32bd9e1d045-config-volume\") pod \"collect-profiles-29499270-bqfsn\" (UID: \"3efd4961-21a3-451a-aca3-f32bd9e1d045\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499270-bqfsn" Feb 01 14:30:00 crc kubenswrapper[4820]: I0201 14:30:00.339438 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3efd4961-21a3-451a-aca3-f32bd9e1d045-secret-volume\") pod \"collect-profiles-29499270-bqfsn\" (UID: \"3efd4961-21a3-451a-aca3-f32bd9e1d045\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499270-bqfsn" Feb 01 14:30:00 crc kubenswrapper[4820]: I0201 14:30:00.354919 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffz7z\" (UniqueName: \"kubernetes.io/projected/3efd4961-21a3-451a-aca3-f32bd9e1d045-kube-api-access-ffz7z\") pod \"collect-profiles-29499270-bqfsn\" (UID: \"3efd4961-21a3-451a-aca3-f32bd9e1d045\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499270-bqfsn" Feb 01 14:30:00 crc kubenswrapper[4820]: I0201 14:30:00.539858 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499270-bqfsn" Feb 01 14:30:00 crc kubenswrapper[4820]: I0201 14:30:00.739351 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499270-bqfsn"] Feb 01 14:30:01 crc kubenswrapper[4820]: I0201 14:30:01.328089 4820 generic.go:334] "Generic (PLEG): container finished" podID="3efd4961-21a3-451a-aca3-f32bd9e1d045" containerID="95a8b4c937921840549b43342aabb09eaa31b74f1f372f8ac308af936f9c2472" exitCode=0 Feb 01 14:30:01 crc kubenswrapper[4820]: I0201 14:30:01.328219 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499270-bqfsn" event={"ID":"3efd4961-21a3-451a-aca3-f32bd9e1d045","Type":"ContainerDied","Data":"95a8b4c937921840549b43342aabb09eaa31b74f1f372f8ac308af936f9c2472"} Feb 01 14:30:01 crc kubenswrapper[4820]: I0201 14:30:01.328477 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499270-bqfsn" event={"ID":"3efd4961-21a3-451a-aca3-f32bd9e1d045","Type":"ContainerStarted","Data":"f823d79c15348e1d6c8a9afb600e76994d78d46b810d8230259a9f5926509a65"} Feb 01 14:30:02 crc kubenswrapper[4820]: I0201 14:30:02.587783 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499270-bqfsn" Feb 01 14:30:02 crc kubenswrapper[4820]: I0201 14:30:02.764638 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffz7z\" (UniqueName: \"kubernetes.io/projected/3efd4961-21a3-451a-aca3-f32bd9e1d045-kube-api-access-ffz7z\") pod \"3efd4961-21a3-451a-aca3-f32bd9e1d045\" (UID: \"3efd4961-21a3-451a-aca3-f32bd9e1d045\") " Feb 01 14:30:02 crc kubenswrapper[4820]: I0201 14:30:02.764697 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3efd4961-21a3-451a-aca3-f32bd9e1d045-config-volume\") pod \"3efd4961-21a3-451a-aca3-f32bd9e1d045\" (UID: \"3efd4961-21a3-451a-aca3-f32bd9e1d045\") " Feb 01 14:30:02 crc kubenswrapper[4820]: I0201 14:30:02.764756 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3efd4961-21a3-451a-aca3-f32bd9e1d045-secret-volume\") pod \"3efd4961-21a3-451a-aca3-f32bd9e1d045\" (UID: \"3efd4961-21a3-451a-aca3-f32bd9e1d045\") " Feb 01 14:30:02 crc kubenswrapper[4820]: I0201 14:30:02.766109 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3efd4961-21a3-451a-aca3-f32bd9e1d045-config-volume" (OuterVolumeSpecName: "config-volume") pod "3efd4961-21a3-451a-aca3-f32bd9e1d045" (UID: "3efd4961-21a3-451a-aca3-f32bd9e1d045"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:30:02 crc kubenswrapper[4820]: I0201 14:30:02.769275 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3efd4961-21a3-451a-aca3-f32bd9e1d045-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3efd4961-21a3-451a-aca3-f32bd9e1d045" (UID: "3efd4961-21a3-451a-aca3-f32bd9e1d045"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:30:02 crc kubenswrapper[4820]: I0201 14:30:02.773199 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3efd4961-21a3-451a-aca3-f32bd9e1d045-kube-api-access-ffz7z" (OuterVolumeSpecName: "kube-api-access-ffz7z") pod "3efd4961-21a3-451a-aca3-f32bd9e1d045" (UID: "3efd4961-21a3-451a-aca3-f32bd9e1d045"). InnerVolumeSpecName "kube-api-access-ffz7z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:30:02 crc kubenswrapper[4820]: I0201 14:30:02.865648 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffz7z\" (UniqueName: \"kubernetes.io/projected/3efd4961-21a3-451a-aca3-f32bd9e1d045-kube-api-access-ffz7z\") on node \"crc\" DevicePath \"\"" Feb 01 14:30:02 crc kubenswrapper[4820]: I0201 14:30:02.865679 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3efd4961-21a3-451a-aca3-f32bd9e1d045-config-volume\") on node \"crc\" DevicePath \"\"" Feb 01 14:30:02 crc kubenswrapper[4820]: I0201 14:30:02.865690 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3efd4961-21a3-451a-aca3-f32bd9e1d045-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 01 14:30:03 crc kubenswrapper[4820]: I0201 14:30:03.341488 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499270-bqfsn" event={"ID":"3efd4961-21a3-451a-aca3-f32bd9e1d045","Type":"ContainerDied","Data":"f823d79c15348e1d6c8a9afb600e76994d78d46b810d8230259a9f5926509a65"} Feb 01 14:30:03 crc kubenswrapper[4820]: I0201 14:30:03.341524 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f823d79c15348e1d6c8a9afb600e76994d78d46b810d8230259a9f5926509a65" Feb 01 14:30:03 crc kubenswrapper[4820]: I0201 14:30:03.341551 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499270-bqfsn" Feb 01 14:30:19 crc kubenswrapper[4820]: I0201 14:30:19.242928 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:30:19 crc kubenswrapper[4820]: I0201 14:30:19.244136 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:30:49 crc kubenswrapper[4820]: I0201 14:30:49.242613 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:30:49 crc kubenswrapper[4820]: I0201 14:30:49.243418 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:31:19 crc kubenswrapper[4820]: I0201 14:31:19.242184 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:31:19 crc kubenswrapper[4820]: I0201 
14:31:19.242720 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:31:19 crc kubenswrapper[4820]: I0201 14:31:19.242760 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 14:31:19 crc kubenswrapper[4820]: I0201 14:31:19.243238 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"467d256019f163fae0b6c21f79014e504c0f178df2ba0ae24f36681ac92b6dfe"} pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 14:31:19 crc kubenswrapper[4820]: I0201 14:31:19.243279 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" containerID="cri-o://467d256019f163fae0b6c21f79014e504c0f178df2ba0ae24f36681ac92b6dfe" gracePeriod=600 Feb 01 14:31:19 crc kubenswrapper[4820]: I0201 14:31:19.761647 4820 generic.go:334] "Generic (PLEG): container finished" podID="060a9e0b-803f-4ccc-bed6-92614d449527" containerID="467d256019f163fae0b6c21f79014e504c0f178df2ba0ae24f36681ac92b6dfe" exitCode=0 Feb 01 14:31:19 crc kubenswrapper[4820]: I0201 14:31:19.761808 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerDied","Data":"467d256019f163fae0b6c21f79014e504c0f178df2ba0ae24f36681ac92b6dfe"} Feb 01 14:31:19 crc kubenswrapper[4820]: I0201 14:31:19.761941 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerStarted","Data":"f42c80990978c64d79b08815b144c253d004fd2bc9ddecf8ee4b80f9fef30c27"} Feb 01 14:31:19 crc kubenswrapper[4820]: I0201 14:31:19.761962 4820 scope.go:117] "RemoveContainer" containerID="7b99d9875d1bdea691633e5af419b10acd106e3faee10becf6662bf488aeca9f" Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.385141 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-l9lrd"] Feb 01 14:31:20 crc kubenswrapper[4820]: E0201 14:31:20.385425 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3efd4961-21a3-451a-aca3-f32bd9e1d045" containerName="collect-profiles" Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.385445 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3efd4961-21a3-451a-aca3-f32bd9e1d045" containerName="collect-profiles" Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.385639 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3efd4961-21a3-451a-aca3-f32bd9e1d045" containerName="collect-profiles" Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.386230 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l9lrd" Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.387774 4820 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-sxtcl" Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.388576 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.389482 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.396025 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-l9lrd"] Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.407950 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-s2v7v"] Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.408973 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-s2v7v" Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.412036 4820 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-wrxsc" Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.417284 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-s2v7v"] Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.427320 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-77x8h"] Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.428183 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-77x8h" Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.430455 4820 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-9c27s" Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.443588 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-77x8h"] Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.488019 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvspx\" (UniqueName: \"kubernetes.io/projected/d6214179-43bd-4284-83fa-62d0799035c3-kube-api-access-mvspx\") pod \"cert-manager-webhook-687f57d79b-77x8h\" (UID: \"d6214179-43bd-4284-83fa-62d0799035c3\") " pod="cert-manager/cert-manager-webhook-687f57d79b-77x8h" Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.488075 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g5mv\" (UniqueName: \"kubernetes.io/projected/abf63078-c1c6-489d-a164-ed6693f7cc18-kube-api-access-8g5mv\") pod \"cert-manager-858654f9db-s2v7v\" (UID: \"abf63078-c1c6-489d-a164-ed6693f7cc18\") " pod="cert-manager/cert-manager-858654f9db-s2v7v" Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.488105 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb6cq\" (UniqueName: \"kubernetes.io/projected/0583e704-ae2a-4945-bb44-316d410c6600-kube-api-access-pb6cq\") pod \"cert-manager-cainjector-cf98fcc89-l9lrd\" (UID: \"0583e704-ae2a-4945-bb44-316d410c6600\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-l9lrd" Feb 01 14:31:20 crc 
kubenswrapper[4820]: I0201 14:31:20.589399 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvspx\" (UniqueName: \"kubernetes.io/projected/d6214179-43bd-4284-83fa-62d0799035c3-kube-api-access-mvspx\") pod \"cert-manager-webhook-687f57d79b-77x8h\" (UID: \"d6214179-43bd-4284-83fa-62d0799035c3\") " pod="cert-manager/cert-manager-webhook-687f57d79b-77x8h" Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.589738 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g5mv\" (UniqueName: \"kubernetes.io/projected/abf63078-c1c6-489d-a164-ed6693f7cc18-kube-api-access-8g5mv\") pod \"cert-manager-858654f9db-s2v7v\" (UID: \"abf63078-c1c6-489d-a164-ed6693f7cc18\") " pod="cert-manager/cert-manager-858654f9db-s2v7v" Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.589838 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb6cq\" (UniqueName: \"kubernetes.io/projected/0583e704-ae2a-4945-bb44-316d410c6600-kube-api-access-pb6cq\") pod \"cert-manager-cainjector-cf98fcc89-l9lrd\" (UID: \"0583e704-ae2a-4945-bb44-316d410c6600\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-l9lrd" Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.610901 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g5mv\" (UniqueName: \"kubernetes.io/projected/abf63078-c1c6-489d-a164-ed6693f7cc18-kube-api-access-8g5mv\") pod \"cert-manager-858654f9db-s2v7v\" (UID: \"abf63078-c1c6-489d-a164-ed6693f7cc18\") " pod="cert-manager/cert-manager-858654f9db-s2v7v" Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.612518 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvspx\" (UniqueName: \"kubernetes.io/projected/d6214179-43bd-4284-83fa-62d0799035c3-kube-api-access-mvspx\") pod \"cert-manager-webhook-687f57d79b-77x8h\" (UID: \"d6214179-43bd-4284-83fa-62d0799035c3\") " pod="cert-manager/cert-manager-webhook-687f57d79b-77x8h" Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.612646 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb6cq\" (UniqueName: \"kubernetes.io/projected/0583e704-ae2a-4945-bb44-316d410c6600-kube-api-access-pb6cq\") pod \"cert-manager-cainjector-cf98fcc89-l9lrd\" (UID: \"0583e704-ae2a-4945-bb44-316d410c6600\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-l9lrd" Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.703076 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l9lrd" Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.722318 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-s2v7v" Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.740745 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-77x8h" Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.897838 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-l9lrd"] Feb 01 14:31:20 crc kubenswrapper[4820]: W0201 14:31:20.899246 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0583e704_ae2a_4945_bb44_316d410c6600.slice/crio-22d756da4beb5535e80d7c4b4ed1db326a7116a8ec96c33931e16613c0408267 WatchSource:0}: Error finding container 22d756da4beb5535e80d7c4b4ed1db326a7116a8ec96c33931e16613c0408267: Status 404 returned error can't find the container with id 22d756da4beb5535e80d7c4b4ed1db326a7116a8ec96c33931e16613c0408267 Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.902289 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.936965 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-s2v7v"] Feb 01 14:31:20 crc kubenswrapper[4820]: W0201 14:31:20.947041 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabf63078_c1c6_489d_a164_ed6693f7cc18.slice/crio-2c7e96d35ce4cb12c6a16d8139eebe9c3a1c0ccb705edd76c5f8d65f4071968b WatchSource:0}: Error finding container 2c7e96d35ce4cb12c6a16d8139eebe9c3a1c0ccb705edd76c5f8d65f4071968b: Status 404 returned error can't find the container with id 2c7e96d35ce4cb12c6a16d8139eebe9c3a1c0ccb705edd76c5f8d65f4071968b Feb 01 14:31:20 crc kubenswrapper[4820]: I0201 14:31:20.976371 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-77x8h"] Feb 01 14:31:20 crc kubenswrapper[4820]: W0201 14:31:20.981549 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6214179_43bd_4284_83fa_62d0799035c3.slice/crio-831297f81b062fefe5dde286e83df6c781afbc9fb1c77edd9135838118919618 WatchSource:0}: Error finding container 831297f81b062fefe5dde286e83df6c781afbc9fb1c77edd9135838118919618: Status 404 returned error can't find the container with id 831297f81b062fefe5dde286e83df6c781afbc9fb1c77edd9135838118919618 Feb 01 14:31:21 crc kubenswrapper[4820]: I0201 14:31:21.783616 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-77x8h" event={"ID":"d6214179-43bd-4284-83fa-62d0799035c3","Type":"ContainerStarted","Data":"831297f81b062fefe5dde286e83df6c781afbc9fb1c77edd9135838118919618"} Feb 01 14:31:21 crc kubenswrapper[4820]: I0201 14:31:21.784951 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-s2v7v" event={"ID":"abf63078-c1c6-489d-a164-ed6693f7cc18","Type":"ContainerStarted","Data":"2c7e96d35ce4cb12c6a16d8139eebe9c3a1c0ccb705edd76c5f8d65f4071968b"} Feb 01 14:31:21 crc kubenswrapper[4820]: I0201 14:31:21.785647 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l9lrd" event={"ID":"0583e704-ae2a-4945-bb44-316d410c6600","Type":"ContainerStarted","Data":"22d756da4beb5535e80d7c4b4ed1db326a7116a8ec96c33931e16613c0408267"} Feb 01 14:31:24 crc kubenswrapper[4820]: I0201 14:31:24.801469 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l9lrd" 
event={"ID":"0583e704-ae2a-4945-bb44-316d410c6600","Type":"ContainerStarted","Data":"ffeceec50e8b53567881ea2ff1c4614f9af4702ec58970e46e843f9d5e93599a"} Feb 01 14:31:24 crc kubenswrapper[4820]: I0201 14:31:24.803105 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-77x8h" event={"ID":"d6214179-43bd-4284-83fa-62d0799035c3","Type":"ContainerStarted","Data":"105cbd80075b40b562c44ed199a41ce088f48a25e9ea6dc1ab4ebced9d0298d1"} Feb 01 14:31:24 crc kubenswrapper[4820]: I0201 14:31:24.803271 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-77x8h" Feb 01 14:31:24 crc kubenswrapper[4820]: I0201 14:31:24.827026 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l9lrd" podStartSLOduration=1.341266831 podStartE2EDuration="4.827011889s" podCreationTimestamp="2026-02-01 14:31:20 +0000 UTC" firstStartedPulling="2026-02-01 14:31:20.901769134 +0000 UTC m=+622.422135418" lastFinishedPulling="2026-02-01 14:31:24.387514192 +0000 UTC m=+625.907880476" observedRunningTime="2026-02-01 14:31:24.826000664 +0000 UTC m=+626.346366958" watchObservedRunningTime="2026-02-01 14:31:24.827011889 +0000 UTC m=+626.347378173" Feb 01 14:31:24 crc kubenswrapper[4820]: I0201 14:31:24.849736 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-77x8h" podStartSLOduration=1.452269289 podStartE2EDuration="4.849718505s" podCreationTimestamp="2026-02-01 14:31:20 +0000 UTC" firstStartedPulling="2026-02-01 14:31:20.983614927 +0000 UTC m=+622.503981211" lastFinishedPulling="2026-02-01 14:31:24.381064143 +0000 UTC m=+625.901430427" observedRunningTime="2026-02-01 14:31:24.846964748 +0000 UTC m=+626.367331042" watchObservedRunningTime="2026-02-01 14:31:24.849718505 +0000 UTC m=+626.370084789" Feb 01 14:31:25 crc kubenswrapper[4820]: I0201 14:31:25.808702 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-s2v7v" event={"ID":"abf63078-c1c6-489d-a164-ed6693f7cc18","Type":"ContainerStarted","Data":"eef91bddfcebd607cefedb768e08a29b6068b3f00ea8cd129d84c754c96e6562"} Feb 01 14:31:25 crc kubenswrapper[4820]: I0201 14:31:25.821761 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-s2v7v" podStartSLOduration=1.960235865 podStartE2EDuration="5.821740289s" podCreationTimestamp="2026-02-01 14:31:20 +0000 UTC" firstStartedPulling="2026-02-01 14:31:20.952530687 +0000 UTC m=+622.472896971" lastFinishedPulling="2026-02-01 14:31:24.814035111 +0000 UTC m=+626.334401395" observedRunningTime="2026-02-01 14:31:25.821265787 +0000 UTC m=+627.341632101" watchObservedRunningTime="2026-02-01 14:31:25.821740289 +0000 UTC m=+627.342106573" Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.352048 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m4skx"] Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.352628 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="ovn-controller" containerID="cri-o://0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6" gracePeriod=30 Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.352694 4820 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="northd" containerID="cri-o://016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9" gracePeriod=30 Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.352731 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="ovn-acl-logging" containerID="cri-o://c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7" gracePeriod=30 Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.352772 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="kube-rbac-proxy-node" containerID="cri-o://eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1" gracePeriod=30 Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.352833 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="sbdb" containerID="cri-o://c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037" gracePeriod=30 Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.352849 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="nbdb" containerID="cri-o://92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d" gracePeriod=30 Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.352864 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f" gracePeriod=30 Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.382515 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="ovnkube-controller" containerID="cri-o://1fdff6e21a556b7683444220a0668c8589786a358238fba9043bb1e7ce3d8206" gracePeriod=30 Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.744441 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-77x8h" Feb 01 14:31:30 crc kubenswrapper[4820]: E0201 14:31:30.806047 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037 is running failed: container process not found" containerID="c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 01 14:31:30 crc kubenswrapper[4820]: E0201 14:31:30.806565 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037 is running failed: container process not found" containerID="c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 01 14:31:30 crc kubenswrapper[4820]: E0201 14:31:30.806621 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d is running failed: container process not found" containerID="92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 01 14:31:30 crc kubenswrapper[4820]: E0201 14:31:30.807169 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037 is running failed: container process not found" containerID="c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 01 14:31:30 crc kubenswrapper[4820]: E0201 14:31:30.807191 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d is running failed: container process not found" containerID="92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 01 14:31:30 crc kubenswrapper[4820]: E0201 14:31:30.807216 4820 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="sbdb" Feb 01 14:31:30 crc kubenswrapper[4820]: E0201 14:31:30.807743 4820 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d is running failed: container process not found" containerID="92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 01 14:31:30 crc kubenswrapper[4820]: E0201 14:31:30.807847 4820 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="nbdb" Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.840139 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q922s_20f8fae3-1755-461a-8748-a0033423ad5a/kube-multus/2.log" Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.840929 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q922s_20f8fae3-1755-461a-8748-a0033423ad5a/kube-multus/1.log" Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.841021 4820 generic.go:334] "Generic (PLEG): container finished" podID="20f8fae3-1755-461a-8748-a0033423ad5a" containerID="afac57817affcc730677ddc4fa2a32dfe0dbe317a5a6f1b31a7e83564ca75eb7" exitCode=2 Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.841143 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q922s" event={"ID":"20f8fae3-1755-461a-8748-a0033423ad5a","Type":"ContainerDied","Data":"afac57817affcc730677ddc4fa2a32dfe0dbe317a5a6f1b31a7e83564ca75eb7"} Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.841225 4820 scope.go:117] "RemoveContainer" containerID="0da5ee5ab8907d5144b25f7740d060ca28d4b2be0d366187088c6562cd92eb9a" Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.841829 4820 scope.go:117] "RemoveContainer" containerID="afac57817affcc730677ddc4fa2a32dfe0dbe317a5a6f1b31a7e83564ca75eb7" Feb 01 14:31:30 crc kubenswrapper[4820]: E0201 14:31:30.842139 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-q922s_openshift-multus(20f8fae3-1755-461a-8748-a0033423ad5a)\"" pod="openshift-multus/multus-q922s" podUID="20f8fae3-1755-461a-8748-a0033423ad5a" Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.845991 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4skx_2c428279-629a-4fd5-9955-1598ed4f6f84/ovnkube-controller/3.log" Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.851390 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4skx_2c428279-629a-4fd5-9955-1598ed4f6f84/ovn-acl-logging/0.log" Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.852061 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4skx_2c428279-629a-4fd5-9955-1598ed4f6f84/ovn-controller/0.log" Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.853136 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerID="1fdff6e21a556b7683444220a0668c8589786a358238fba9043bb1e7ce3d8206" exitCode=0 Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.853182 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerID="c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037" exitCode=0 Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.853198 4820 
generic.go:334] "Generic (PLEG): container finished" podID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerID="92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d" exitCode=0 Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.853215 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerID="016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9" exitCode=0 Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.853229 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerID="b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f" exitCode=0 Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.853243 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerID="eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1" exitCode=0 Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.853241 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerDied","Data":"1fdff6e21a556b7683444220a0668c8589786a358238fba9043bb1e7ce3d8206"} Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.853316 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerDied","Data":"c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037"} Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.853344 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerDied","Data":"92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d"} Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.853363 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerDied","Data":"016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9"} Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.853380 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerDied","Data":"b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f"} Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.853399 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerDied","Data":"eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1"} Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.853416 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerDied","Data":"c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7"} Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.853257 4820 generic.go:334] "Generic (PLEG): container finished" podID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerID="c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7" exitCode=143 Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.853456 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerID="0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6" exitCode=143 Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.853485 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerDied","Data":"0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6"} Feb 01 14:31:30 crc kubenswrapper[4820]: I0201 14:31:30.890169 4820 scope.go:117] "RemoveContainer" containerID="8f79dbe742b26404c994277986f796d382ed0b76cec41b8e23c39cfa39c98331" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.023756 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4skx_2c428279-629a-4fd5-9955-1598ed4f6f84/ovn-acl-logging/0.log" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.024344 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4skx_2c428279-629a-4fd5-9955-1598ed4f6f84/ovn-controller/0.log" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.024993 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.104348 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dcw5v"] Feb 01 14:31:31 crc kubenswrapper[4820]: E0201 14:31:31.104712 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="nbdb" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.104739 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="nbdb" Feb 01 14:31:31 crc kubenswrapper[4820]: E0201 14:31:31.104765 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="ovnkube-controller" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.104784 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="ovnkube-controller" Feb 01 14:31:31 crc kubenswrapper[4820]: E0201 14:31:31.104806 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="kube-rbac-proxy-node" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.104824 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="kube-rbac-proxy-node" Feb 01 14:31:31 crc kubenswrapper[4820]: E0201 14:31:31.104845 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="ovn-controller" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.104861 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="ovn-controller" Feb 01 14:31:31 crc kubenswrapper[4820]: E0201 14:31:31.105005 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="sbdb" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.105085 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="sbdb" Feb 01 14:31:31 crc kubenswrapper[4820]: E0201 14:31:31.105106 4820 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="ovnkube-controller" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.105122 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="ovnkube-controller" Feb 01 14:31:31 crc kubenswrapper[4820]: E0201 14:31:31.105138 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="northd" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.105154 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="northd" Feb 01 14:31:31 crc kubenswrapper[4820]: E0201 14:31:31.105181 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="ovnkube-controller" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.105196 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="ovnkube-controller" Feb 01 14:31:31 crc kubenswrapper[4820]: E0201 14:31:31.105219 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="ovn-acl-logging" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.105235 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="ovn-acl-logging" Feb 01 14:31:31 crc kubenswrapper[4820]: E0201 14:31:31.105257 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="ovnkube-controller" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.105273 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="ovnkube-controller" Feb 01 14:31:31 crc kubenswrapper[4820]: E0201 14:31:31.105295 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="kube-rbac-proxy-ovn-metrics" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.105311 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="kube-rbac-proxy-ovn-metrics" Feb 01 14:31:31 crc kubenswrapper[4820]: E0201 14:31:31.105337 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="kubecfg-setup" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.105353 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="kubecfg-setup" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.105563 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="ovnkube-controller" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.105586 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="nbdb" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.105605 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="sbdb" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.105622 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="ovn-acl-logging" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.105645 4820 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="ovnkube-controller" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.105662 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="ovnkube-controller" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.105686 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="ovnkube-controller" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.105708 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="northd" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.105730 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="kube-rbac-proxy-ovn-metrics" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.105758 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="ovn-controller" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.105773 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="kube-rbac-proxy-node" Feb 01 14:31:31 crc kubenswrapper[4820]: E0201 14:31:31.106016 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="ovnkube-controller" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.106040 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="ovnkube-controller" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.106262 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" containerName="ovnkube-controller" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.110191 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.136573 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-run-netns\") pod \"2c428279-629a-4fd5-9955-1598ed4f6f84\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.136637 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-run-systemd\") pod \"2c428279-629a-4fd5-9955-1598ed4f6f84\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.136675 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-log-socket\") pod \"2c428279-629a-4fd5-9955-1598ed4f6f84\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.136713 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-cni-netd\") pod \"2c428279-629a-4fd5-9955-1598ed4f6f84\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.136760 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-var-lib-openvswitch\") pod \"2c428279-629a-4fd5-9955-1598ed4f6f84\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.136755 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "2c428279-629a-4fd5-9955-1598ed4f6f84" (UID: "2c428279-629a-4fd5-9955-1598ed4f6f84"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.136789 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-kubelet\") pod \"2c428279-629a-4fd5-9955-1598ed4f6f84\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.136825 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-run-ovn-kubernetes\") pod \"2c428279-629a-4fd5-9955-1598ed4f6f84\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.136864 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-run-ovn\") pod \"2c428279-629a-4fd5-9955-1598ed4f6f84\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.137015 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-systemd-units\") pod \"2c428279-629a-4fd5-9955-1598ed4f6f84\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.136816 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-log-socket" (OuterVolumeSpecName: "log-socket") pod "2c428279-629a-4fd5-9955-1598ed4f6f84" (UID: "2c428279-629a-4fd5-9955-1598ed4f6f84"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.136835 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "2c428279-629a-4fd5-9955-1598ed4f6f84" (UID: "2c428279-629a-4fd5-9955-1598ed4f6f84"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.136909 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "2c428279-629a-4fd5-9955-1598ed4f6f84" (UID: "2c428279-629a-4fd5-9955-1598ed4f6f84"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.136866 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "2c428279-629a-4fd5-9955-1598ed4f6f84" (UID: "2c428279-629a-4fd5-9955-1598ed4f6f84"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.136913 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "2c428279-629a-4fd5-9955-1598ed4f6f84" (UID: "2c428279-629a-4fd5-9955-1598ed4f6f84"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.136960 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "2c428279-629a-4fd5-9955-1598ed4f6f84" (UID: "2c428279-629a-4fd5-9955-1598ed4f6f84"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.137099 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "2c428279-629a-4fd5-9955-1598ed4f6f84" (UID: "2c428279-629a-4fd5-9955-1598ed4f6f84"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.137145 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "2c428279-629a-4fd5-9955-1598ed4f6f84" (UID: "2c428279-629a-4fd5-9955-1598ed4f6f84"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.137045 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-var-lib-cni-networks-ovn-kubernetes\") pod \"2c428279-629a-4fd5-9955-1598ed4f6f84\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.137263 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lp6j\" (UniqueName: \"kubernetes.io/projected/2c428279-629a-4fd5-9955-1598ed4f6f84-kube-api-access-6lp6j\") pod \"2c428279-629a-4fd5-9955-1598ed4f6f84\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.137293 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-slash\") pod \"2c428279-629a-4fd5-9955-1598ed4f6f84\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.137377 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-node-log\") pod \"2c428279-629a-4fd5-9955-1598ed4f6f84\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.137469 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-cni-bin\") pod \"2c428279-629a-4fd5-9955-1598ed4f6f84\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.137506 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2c428279-629a-4fd5-9955-1598ed4f6f84-ovnkube-script-lib\") pod \"2c428279-629a-4fd5-9955-1598ed4f6f84\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.137542 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-etc-openvswitch\") pod \"2c428279-629a-4fd5-9955-1598ed4f6f84\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.137574 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c428279-629a-4fd5-9955-1598ed4f6f84-env-overrides\") pod \"2c428279-629a-4fd5-9955-1598ed4f6f84\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.137615 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c428279-629a-4fd5-9955-1598ed4f6f84-ovn-node-metrics-cert\") pod \"2c428279-629a-4fd5-9955-1598ed4f6f84\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.137652 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/2c428279-629a-4fd5-9955-1598ed4f6f84-ovnkube-config\") pod \"2c428279-629a-4fd5-9955-1598ed4f6f84\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.137684 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-run-openvswitch\") pod \"2c428279-629a-4fd5-9955-1598ed4f6f84\" (UID: \"2c428279-629a-4fd5-9955-1598ed4f6f84\") " Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.138035 4820 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.138056 4820 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-log-socket\") on node \"crc\" DevicePath \"\"" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.138075 4820 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.138092 4820 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.138109 4820 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.138125 4820 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.138143 4820 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.138162 4820 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.138180 4820 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.138226 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "2c428279-629a-4fd5-9955-1598ed4f6f84" (UID: "2c428279-629a-4fd5-9955-1598ed4f6f84"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.138265 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-slash" (OuterVolumeSpecName: "host-slash") pod "2c428279-629a-4fd5-9955-1598ed4f6f84" (UID: "2c428279-629a-4fd5-9955-1598ed4f6f84"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.138322 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "2c428279-629a-4fd5-9955-1598ed4f6f84" (UID: "2c428279-629a-4fd5-9955-1598ed4f6f84"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.138391 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-node-log" (OuterVolumeSpecName: "node-log") pod "2c428279-629a-4fd5-9955-1598ed4f6f84" (UID: "2c428279-629a-4fd5-9955-1598ed4f6f84"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.138438 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "2c428279-629a-4fd5-9955-1598ed4f6f84" (UID: "2c428279-629a-4fd5-9955-1598ed4f6f84"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.138866 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c428279-629a-4fd5-9955-1598ed4f6f84-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2c428279-629a-4fd5-9955-1598ed4f6f84" (UID: "2c428279-629a-4fd5-9955-1598ed4f6f84"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.139583 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c428279-629a-4fd5-9955-1598ed4f6f84-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2c428279-629a-4fd5-9955-1598ed4f6f84" (UID: "2c428279-629a-4fd5-9955-1598ed4f6f84"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.139671 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c428279-629a-4fd5-9955-1598ed4f6f84-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "2c428279-629a-4fd5-9955-1598ed4f6f84" (UID: "2c428279-629a-4fd5-9955-1598ed4f6f84"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.143860 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c428279-629a-4fd5-9955-1598ed4f6f84-kube-api-access-6lp6j" (OuterVolumeSpecName: "kube-api-access-6lp6j") pod "2c428279-629a-4fd5-9955-1598ed4f6f84" (UID: "2c428279-629a-4fd5-9955-1598ed4f6f84"). InnerVolumeSpecName "kube-api-access-6lp6j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.144327 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c428279-629a-4fd5-9955-1598ed4f6f84-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "2c428279-629a-4fd5-9955-1598ed4f6f84" (UID: "2c428279-629a-4fd5-9955-1598ed4f6f84"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.154925 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "2c428279-629a-4fd5-9955-1598ed4f6f84" (UID: "2c428279-629a-4fd5-9955-1598ed4f6f84"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.239212 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-host-kubelet\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.239512 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-run-openvswitch\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.239708 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4qlw\" (UniqueName: \"kubernetes.io/projected/d3415bad-e125-49ef-807f-8748745ce5d3-kube-api-access-w4qlw\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.239974 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-run-ovn\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.240248 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-var-lib-openvswitch\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.240483 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-run-systemd\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.240669 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/d3415bad-e125-49ef-807f-8748745ce5d3-ovn-node-metrics-cert\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.240830 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-etc-openvswitch\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.241082 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-node-log\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.241269 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-log-socket\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.241430 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d3415bad-e125-49ef-807f-8748745ce5d3-ovnkube-script-lib\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.241607 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-host-run-netns\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.241792 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-host-slash\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.242022 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d3415bad-e125-49ef-807f-8748745ce5d3-ovnkube-config\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.242189 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-host-cni-netd\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.242431 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d3415bad-e125-49ef-807f-8748745ce5d3-env-overrides\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.242629 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-host-run-ovn-kubernetes\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.242803 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-systemd-units\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.243039 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.243232 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-host-cni-bin\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.243433 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lp6j\" (UniqueName: \"kubernetes.io/projected/2c428279-629a-4fd5-9955-1598ed4f6f84-kube-api-access-6lp6j\") on node \"crc\" DevicePath \"\"" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.243698 4820 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-slash\") on node \"crc\" DevicePath \"\"" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.243849 4820 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-node-log\") on node \"crc\" DevicePath \"\"" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.244068 4820 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.244209 4820 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2c428279-629a-4fd5-9955-1598ed4f6f84-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.244399 4820 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.244549 4820 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c428279-629a-4fd5-9955-1598ed4f6f84-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.244676 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c428279-629a-4fd5-9955-1598ed4f6f84-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.244841 4820 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c428279-629a-4fd5-9955-1598ed4f6f84-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.245046 4820 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.245180 4820 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2c428279-629a-4fd5-9955-1598ed4f6f84-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.346050 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-run-ovn\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.346363 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-var-lib-openvswitch\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.346449 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-var-lib-openvswitch\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.346243 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-run-ovn\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.346623 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-run-systemd\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.346481 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-run-systemd\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.347023 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-etc-openvswitch\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.347159 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-etc-openvswitch\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.347302 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d3415bad-e125-49ef-807f-8748745ce5d3-ovn-node-metrics-cert\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.347447 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-node-log\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.347590 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-log-socket\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.347709 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-host-run-netns\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.347853 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d3415bad-e125-49ef-807f-8748745ce5d3-ovnkube-script-lib\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.348026 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-host-slash\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.348169 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-node-log\") pod \"ovnkube-node-dcw5v\" (UID: 
\"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.348173 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-host-cni-netd\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.348288 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d3415bad-e125-49ef-807f-8748745ce5d3-ovnkube-config\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.348330 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-host-slash\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.348354 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d3415bad-e125-49ef-807f-8748745ce5d3-env-overrides\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.348124 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-log-socket\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.348435 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-host-run-ovn-kubernetes\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.348520 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-systemd-units\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.348573 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.348649 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-host-cni-bin\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.348753 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-host-kubelet\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.348810 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4qlw\" (UniqueName: \"kubernetes.io/projected/d3415bad-e125-49ef-807f-8748745ce5d3-kube-api-access-w4qlw\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.348859 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-run-openvswitch\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.349079 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-run-openvswitch\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.349153 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-host-run-ovn-kubernetes\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.349214 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-systemd-units\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.349276 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.349340 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-host-cni-bin\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.349399 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-host-kubelet\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc 
kubenswrapper[4820]: I0201 14:31:31.349498 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d3415bad-e125-49ef-807f-8748745ce5d3-env-overrides\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.349699 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d3415bad-e125-49ef-807f-8748745ce5d3-ovnkube-config\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.349778 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-host-run-netns\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.349956 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3415bad-e125-49ef-807f-8748745ce5d3-host-cni-netd\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.350666 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d3415bad-e125-49ef-807f-8748745ce5d3-ovn-node-metrics-cert\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.352187 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d3415bad-e125-49ef-807f-8748745ce5d3-ovnkube-script-lib\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.368077 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4qlw\" (UniqueName: \"kubernetes.io/projected/d3415bad-e125-49ef-807f-8748745ce5d3-kube-api-access-w4qlw\") pod \"ovnkube-node-dcw5v\" (UID: \"d3415bad-e125-49ef-807f-8748745ce5d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.433425 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.860646 4820 generic.go:334] "Generic (PLEG): container finished" podID="d3415bad-e125-49ef-807f-8748745ce5d3" containerID="5842894d0b28870c54a67368b2110e90aaf4dfa7eada4d1a9bb61fa37c30d14a" exitCode=0 Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.860715 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" event={"ID":"d3415bad-e125-49ef-807f-8748745ce5d3","Type":"ContainerDied","Data":"5842894d0b28870c54a67368b2110e90aaf4dfa7eada4d1a9bb61fa37c30d14a"} Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.860763 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" event={"ID":"d3415bad-e125-49ef-807f-8748745ce5d3","Type":"ContainerStarted","Data":"4d57a12285270042f218f340e068d18589f6ff543da61c1af11a8d1ef928f173"} Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.868367 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4skx_2c428279-629a-4fd5-9955-1598ed4f6f84/ovn-acl-logging/0.log" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.869556 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m4skx_2c428279-629a-4fd5-9955-1598ed4f6f84/ovn-controller/0.log" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.869992 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" event={"ID":"2c428279-629a-4fd5-9955-1598ed4f6f84","Type":"ContainerDied","Data":"f6638deac942a682a1833bbecc88e4131a6825f9cddceeee12cc776f30370fa8"} Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.870023 4820 scope.go:117] "RemoveContainer" containerID="1fdff6e21a556b7683444220a0668c8589786a358238fba9043bb1e7ce3d8206" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.870086 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m4skx" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.874718 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q922s_20f8fae3-1755-461a-8748-a0033423ad5a/kube-multus/2.log" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.899359 4820 scope.go:117] "RemoveContainer" containerID="c23aee92cafe750112261300a774a99e38a57de0bd3aea2e318970b5d3a88037" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.914699 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m4skx"] Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.917898 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m4skx"] Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.946634 4820 scope.go:117] "RemoveContainer" containerID="92e428a6ff9695311f6c0a69a96ef9963b34aaf26d7700c83e707e80ecbb846d" Feb 01 14:31:31 crc kubenswrapper[4820]: I0201 14:31:31.971653 4820 scope.go:117] "RemoveContainer" containerID="016cdbf990625d5e80c474d206ec2fb15c38477bc0414fc13831e5c86f2033e9" Feb 01 14:31:32 crc kubenswrapper[4820]: I0201 14:31:32.013493 4820 scope.go:117] "RemoveContainer" containerID="b1264ba24aebbc264713401b18880c4bc4af7db19b15812cf7ca6c67ac1e208f" Feb 01 14:31:32 crc kubenswrapper[4820]: I0201 14:31:32.025636 4820 scope.go:117] "RemoveContainer" containerID="eadb220bfda1ebc2c8c43c07d282465f0698b0687c0e3b8be482a1ee8fe7e0f1" Feb 01 14:31:32 crc kubenswrapper[4820]: I0201 14:31:32.045537 4820 scope.go:117] "RemoveContainer" containerID="c89a0d4c3f64dc337cb2efb5ab75a500572b28b81befcc517a79327d6d0900b7" Feb 01 14:31:32 crc kubenswrapper[4820]: I0201 14:31:32.059046 4820 scope.go:117] "RemoveContainer" containerID="0edaa447f1e19e2a55c649112bf415bf206a2fbf631d34af3d4b0de39bae7af6" Feb 01 14:31:32 crc kubenswrapper[4820]: I0201 14:31:32.070935 4820 scope.go:117] "RemoveContainer" containerID="99955a26a6bb8a3587c89bbe2f63ce88ab201685ccbab2c37b9ee7b54544bc90" Feb 01 14:31:32 crc kubenswrapper[4820]: I0201 14:31:32.882135 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" event={"ID":"d3415bad-e125-49ef-807f-8748745ce5d3","Type":"ContainerStarted","Data":"1b9729df66b7ab8b15040a0d3882817471478030dafbdfcb017d4ea2698dd9e4"} Feb 01 14:31:32 crc kubenswrapper[4820]: I0201 14:31:32.882218 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" event={"ID":"d3415bad-e125-49ef-807f-8748745ce5d3","Type":"ContainerStarted","Data":"1f7ed8dc8a77aef57f4a543be40f0db34410c7c880619cdaac48989fe5a692e4"} Feb 01 14:31:32 crc kubenswrapper[4820]: I0201 14:31:32.882257 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" event={"ID":"d3415bad-e125-49ef-807f-8748745ce5d3","Type":"ContainerStarted","Data":"45a718b1bd06c21a984f9ff5494a1924b23d2aec9623572d5b98a67313d9cf94"} Feb 01 14:31:32 crc kubenswrapper[4820]: I0201 14:31:32.882285 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" event={"ID":"d3415bad-e125-49ef-807f-8748745ce5d3","Type":"ContainerStarted","Data":"194e456f66ad26a84adfa4ff7a4264598c37603b7c35f5961bf438c15ec80e82"} Feb 01 14:31:32 crc kubenswrapper[4820]: I0201 14:31:32.882310 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" 
event={"ID":"d3415bad-e125-49ef-807f-8748745ce5d3","Type":"ContainerStarted","Data":"8b780c6e41fbe4ed0b85514f617f248f031f4c1e46c8f28dfb18d13872f3dd54"} Feb 01 14:31:32 crc kubenswrapper[4820]: I0201 14:31:32.882333 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" event={"ID":"d3415bad-e125-49ef-807f-8748745ce5d3","Type":"ContainerStarted","Data":"3304c83036c0c19f77e814fa488933db9571ff6c1ae36e84a0572d2d8b7b1fb6"} Feb 01 14:31:33 crc kubenswrapper[4820]: I0201 14:31:33.204954 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c428279-629a-4fd5-9955-1598ed4f6f84" path="/var/lib/kubelet/pods/2c428279-629a-4fd5-9955-1598ed4f6f84/volumes" Feb 01 14:31:34 crc kubenswrapper[4820]: I0201 14:31:34.899032 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" event={"ID":"d3415bad-e125-49ef-807f-8748745ce5d3","Type":"ContainerStarted","Data":"c16da65cc05ed5209a1bf0911ad2f6efc71e0c5176e465e7902351ecb4c29922"} Feb 01 14:31:37 crc kubenswrapper[4820]: I0201 14:31:37.928105 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" event={"ID":"d3415bad-e125-49ef-807f-8748745ce5d3","Type":"ContainerStarted","Data":"eeb74fb2ef84445f94b253587c88c915d820bab744bcbc403e7e1c38163fcf7c"} Feb 01 14:31:37 crc kubenswrapper[4820]: I0201 14:31:37.928521 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:37 crc kubenswrapper[4820]: I0201 14:31:37.928586 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:37 crc kubenswrapper[4820]: I0201 14:31:37.957299 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" podStartSLOduration=6.957276897 podStartE2EDuration="6.957276897s" podCreationTimestamp="2026-02-01 14:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:31:37.953611557 +0000 UTC m=+639.473977851" watchObservedRunningTime="2026-02-01 14:31:37.957276897 +0000 UTC m=+639.477643181" Feb 01 14:31:37 crc kubenswrapper[4820]: I0201 14:31:37.962382 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:38 crc kubenswrapper[4820]: I0201 14:31:38.941353 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:38 crc kubenswrapper[4820]: I0201 14:31:38.979412 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:31:45 crc kubenswrapper[4820]: I0201 14:31:45.199216 4820 scope.go:117] "RemoveContainer" containerID="afac57817affcc730677ddc4fa2a32dfe0dbe317a5a6f1b31a7e83564ca75eb7" Feb 01 14:31:45 crc kubenswrapper[4820]: E0201 14:31:45.199988 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-q922s_openshift-multus(20f8fae3-1755-461a-8748-a0033423ad5a)\"" pod="openshift-multus/multus-q922s" podUID="20f8fae3-1755-461a-8748-a0033423ad5a" Feb 01 14:32:00 crc kubenswrapper[4820]: I0201 14:32:00.198651 4820 scope.go:117] "RemoveContainer" 
containerID="afac57817affcc730677ddc4fa2a32dfe0dbe317a5a6f1b31a7e83564ca75eb7" Feb 01 14:32:01 crc kubenswrapper[4820]: I0201 14:32:01.077749 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q922s_20f8fae3-1755-461a-8748-a0033423ad5a/kube-multus/2.log" Feb 01 14:32:01 crc kubenswrapper[4820]: I0201 14:32:01.078453 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q922s" event={"ID":"20f8fae3-1755-461a-8748-a0033423ad5a","Type":"ContainerStarted","Data":"68809ed3e90dce6f3b8fc5e293b12387c7c040be58c4cf6675192b70dacba832"} Feb 01 14:32:01 crc kubenswrapper[4820]: I0201 14:32:01.458236 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dcw5v" Feb 01 14:32:07 crc kubenswrapper[4820]: I0201 14:32:07.293430 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p"] Feb 01 14:32:07 crc kubenswrapper[4820]: I0201 14:32:07.367454 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p" Feb 01 14:32:07 crc kubenswrapper[4820]: I0201 14:32:07.370230 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 01 14:32:07 crc kubenswrapper[4820]: I0201 14:32:07.384068 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p"] Feb 01 14:32:07 crc kubenswrapper[4820]: I0201 14:32:07.469176 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11ee074d-38c2-43a0-8d78-f1860a212744-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p\" (UID: \"11ee074d-38c2-43a0-8d78-f1860a212744\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p" Feb 01 14:32:07 crc kubenswrapper[4820]: I0201 14:32:07.469340 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11ee074d-38c2-43a0-8d78-f1860a212744-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p\" (UID: \"11ee074d-38c2-43a0-8d78-f1860a212744\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p" Feb 01 14:32:07 crc kubenswrapper[4820]: I0201 14:32:07.469430 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbm4j\" (UniqueName: \"kubernetes.io/projected/11ee074d-38c2-43a0-8d78-f1860a212744-kube-api-access-cbm4j\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p\" (UID: \"11ee074d-38c2-43a0-8d78-f1860a212744\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p" Feb 01 14:32:07 crc kubenswrapper[4820]: I0201 14:32:07.571077 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11ee074d-38c2-43a0-8d78-f1860a212744-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p\" (UID: \"11ee074d-38c2-43a0-8d78-f1860a212744\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p" Feb 01 14:32:07 crc 
kubenswrapper[4820]: I0201 14:32:07.571132 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11ee074d-38c2-43a0-8d78-f1860a212744-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p\" (UID: \"11ee074d-38c2-43a0-8d78-f1860a212744\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p" Feb 01 14:32:07 crc kubenswrapper[4820]: I0201 14:32:07.571190 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbm4j\" (UniqueName: \"kubernetes.io/projected/11ee074d-38c2-43a0-8d78-f1860a212744-kube-api-access-cbm4j\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p\" (UID: \"11ee074d-38c2-43a0-8d78-f1860a212744\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p" Feb 01 14:32:07 crc kubenswrapper[4820]: I0201 14:32:07.571612 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11ee074d-38c2-43a0-8d78-f1860a212744-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p\" (UID: \"11ee074d-38c2-43a0-8d78-f1860a212744\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p" Feb 01 14:32:07 crc kubenswrapper[4820]: I0201 14:32:07.571625 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11ee074d-38c2-43a0-8d78-f1860a212744-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p\" (UID: \"11ee074d-38c2-43a0-8d78-f1860a212744\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p" Feb 01 14:32:07 crc kubenswrapper[4820]: I0201 14:32:07.590310 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbm4j\" (UniqueName: \"kubernetes.io/projected/11ee074d-38c2-43a0-8d78-f1860a212744-kube-api-access-cbm4j\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p\" (UID: \"11ee074d-38c2-43a0-8d78-f1860a212744\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p" Feb 01 14:32:07 crc kubenswrapper[4820]: I0201 14:32:07.699452 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p" Feb 01 14:32:07 crc kubenswrapper[4820]: I0201 14:32:07.885079 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p"] Feb 01 14:32:07 crc kubenswrapper[4820]: W0201 14:32:07.889107 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11ee074d_38c2_43a0_8d78_f1860a212744.slice/crio-2edccbc372dab8cf4788d5efbc0f7cd9a64694c99ceb61e1849bc281221a52d4 WatchSource:0}: Error finding container 2edccbc372dab8cf4788d5efbc0f7cd9a64694c99ceb61e1849bc281221a52d4: Status 404 returned error can't find the container with id 2edccbc372dab8cf4788d5efbc0f7cd9a64694c99ceb61e1849bc281221a52d4 Feb 01 14:32:08 crc kubenswrapper[4820]: I0201 14:32:08.117197 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p" event={"ID":"11ee074d-38c2-43a0-8d78-f1860a212744","Type":"ContainerStarted","Data":"069ed87856ffd858faaf5f60b22f809c43f6ede7aed84c3c28fe7c250f991cb0"} Feb 01 14:32:08 crc kubenswrapper[4820]: I0201 14:32:08.117254 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p" event={"ID":"11ee074d-38c2-43a0-8d78-f1860a212744","Type":"ContainerStarted","Data":"2edccbc372dab8cf4788d5efbc0f7cd9a64694c99ceb61e1849bc281221a52d4"} Feb 01 14:32:09 crc kubenswrapper[4820]: I0201 14:32:09.125720 4820 generic.go:334] "Generic (PLEG): container finished" podID="11ee074d-38c2-43a0-8d78-f1860a212744" containerID="069ed87856ffd858faaf5f60b22f809c43f6ede7aed84c3c28fe7c250f991cb0" exitCode=0 Feb 01 14:32:09 crc kubenswrapper[4820]: I0201 14:32:09.125774 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p" event={"ID":"11ee074d-38c2-43a0-8d78-f1860a212744","Type":"ContainerDied","Data":"069ed87856ffd858faaf5f60b22f809c43f6ede7aed84c3c28fe7c250f991cb0"} Feb 01 14:32:11 crc kubenswrapper[4820]: I0201 14:32:11.137567 4820 generic.go:334] "Generic (PLEG): container finished" podID="11ee074d-38c2-43a0-8d78-f1860a212744" containerID="c6e4f161d227b8d26d9f4a2fde0aba96560ed68ae32e2a7ea51f4dadec11a4ea" exitCode=0 Feb 01 14:32:11 crc kubenswrapper[4820]: I0201 14:32:11.137655 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p" event={"ID":"11ee074d-38c2-43a0-8d78-f1860a212744","Type":"ContainerDied","Data":"c6e4f161d227b8d26d9f4a2fde0aba96560ed68ae32e2a7ea51f4dadec11a4ea"} Feb 01 14:32:12 crc kubenswrapper[4820]: I0201 14:32:12.145511 4820 generic.go:334] "Generic (PLEG): container finished" podID="11ee074d-38c2-43a0-8d78-f1860a212744" containerID="00f04a5dbb0ef97d64b2a2cd02086362e605672137cd755b337c486effdaf13f" exitCode=0 Feb 01 14:32:12 crc kubenswrapper[4820]: I0201 14:32:12.145563 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p" event={"ID":"11ee074d-38c2-43a0-8d78-f1860a212744","Type":"ContainerDied","Data":"00f04a5dbb0ef97d64b2a2cd02086362e605672137cd755b337c486effdaf13f"} Feb 01 14:32:14 crc kubenswrapper[4820]: I0201 14:32:14.368985 4820 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p" Feb 01 14:32:14 crc kubenswrapper[4820]: I0201 14:32:14.557359 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11ee074d-38c2-43a0-8d78-f1860a212744-bundle\") pod \"11ee074d-38c2-43a0-8d78-f1860a212744\" (UID: \"11ee074d-38c2-43a0-8d78-f1860a212744\") " Feb 01 14:32:14 crc kubenswrapper[4820]: I0201 14:32:14.557544 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11ee074d-38c2-43a0-8d78-f1860a212744-util\") pod \"11ee074d-38c2-43a0-8d78-f1860a212744\" (UID: \"11ee074d-38c2-43a0-8d78-f1860a212744\") " Feb 01 14:32:14 crc kubenswrapper[4820]: I0201 14:32:14.557573 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbm4j\" (UniqueName: \"kubernetes.io/projected/11ee074d-38c2-43a0-8d78-f1860a212744-kube-api-access-cbm4j\") pod \"11ee074d-38c2-43a0-8d78-f1860a212744\" (UID: \"11ee074d-38c2-43a0-8d78-f1860a212744\") " Feb 01 14:32:14 crc kubenswrapper[4820]: I0201 14:32:14.558541 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11ee074d-38c2-43a0-8d78-f1860a212744-bundle" (OuterVolumeSpecName: "bundle") pod "11ee074d-38c2-43a0-8d78-f1860a212744" (UID: "11ee074d-38c2-43a0-8d78-f1860a212744"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:32:14 crc kubenswrapper[4820]: I0201 14:32:14.565249 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11ee074d-38c2-43a0-8d78-f1860a212744-kube-api-access-cbm4j" (OuterVolumeSpecName: "kube-api-access-cbm4j") pod "11ee074d-38c2-43a0-8d78-f1860a212744" (UID: "11ee074d-38c2-43a0-8d78-f1860a212744"). InnerVolumeSpecName "kube-api-access-cbm4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:32:14 crc kubenswrapper[4820]: I0201 14:32:14.567588 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11ee074d-38c2-43a0-8d78-f1860a212744-util" (OuterVolumeSpecName: "util") pod "11ee074d-38c2-43a0-8d78-f1860a212744" (UID: "11ee074d-38c2-43a0-8d78-f1860a212744"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:32:14 crc kubenswrapper[4820]: I0201 14:32:14.658195 4820 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11ee074d-38c2-43a0-8d78-f1860a212744-util\") on node \"crc\" DevicePath \"\"" Feb 01 14:32:14 crc kubenswrapper[4820]: I0201 14:32:14.658524 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbm4j\" (UniqueName: \"kubernetes.io/projected/11ee074d-38c2-43a0-8d78-f1860a212744-kube-api-access-cbm4j\") on node \"crc\" DevicePath \"\"" Feb 01 14:32:14 crc kubenswrapper[4820]: I0201 14:32:14.658536 4820 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11ee074d-38c2-43a0-8d78-f1860a212744-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:32:15 crc kubenswrapper[4820]: I0201 14:32:15.164975 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p" event={"ID":"11ee074d-38c2-43a0-8d78-f1860a212744","Type":"ContainerDied","Data":"2edccbc372dab8cf4788d5efbc0f7cd9a64694c99ceb61e1849bc281221a52d4"} Feb 01 14:32:15 crc kubenswrapper[4820]: I0201 14:32:15.165022 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2edccbc372dab8cf4788d5efbc0f7cd9a64694c99ceb61e1849bc281221a52d4" Feb 01 14:32:15 crc kubenswrapper[4820]: I0201 14:32:15.165046 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p" Feb 01 14:32:18 crc kubenswrapper[4820]: I0201 14:32:18.857937 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-tlrts"] Feb 01 14:32:18 crc kubenswrapper[4820]: E0201 14:32:18.858463 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ee074d-38c2-43a0-8d78-f1860a212744" containerName="util" Feb 01 14:32:18 crc kubenswrapper[4820]: I0201 14:32:18.858475 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ee074d-38c2-43a0-8d78-f1860a212744" containerName="util" Feb 01 14:32:18 crc kubenswrapper[4820]: E0201 14:32:18.858486 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ee074d-38c2-43a0-8d78-f1860a212744" containerName="extract" Feb 01 14:32:18 crc kubenswrapper[4820]: I0201 14:32:18.858491 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ee074d-38c2-43a0-8d78-f1860a212744" containerName="extract" Feb 01 14:32:18 crc kubenswrapper[4820]: E0201 14:32:18.858507 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ee074d-38c2-43a0-8d78-f1860a212744" containerName="pull" Feb 01 14:32:18 crc kubenswrapper[4820]: I0201 14:32:18.858513 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ee074d-38c2-43a0-8d78-f1860a212744" containerName="pull" Feb 01 14:32:18 crc kubenswrapper[4820]: I0201 14:32:18.858606 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="11ee074d-38c2-43a0-8d78-f1860a212744" containerName="extract" Feb 01 14:32:18 crc kubenswrapper[4820]: I0201 14:32:18.858975 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-tlrts" Feb 01 14:32:18 crc kubenswrapper[4820]: I0201 14:32:18.861129 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 01 14:32:18 crc kubenswrapper[4820]: I0201 14:32:18.861354 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-9qc5n" Feb 01 14:32:18 crc kubenswrapper[4820]: I0201 14:32:18.861600 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 01 14:32:18 crc kubenswrapper[4820]: I0201 14:32:18.876291 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-tlrts"] Feb 01 14:32:19 crc kubenswrapper[4820]: I0201 14:32:19.006841 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scjrn\" (UniqueName: \"kubernetes.io/projected/33e9fac7-99b7-41d3-beb0-948c190885c3-kube-api-access-scjrn\") pod \"nmstate-operator-646758c888-tlrts\" (UID: \"33e9fac7-99b7-41d3-beb0-948c190885c3\") " pod="openshift-nmstate/nmstate-operator-646758c888-tlrts" Feb 01 14:32:19 crc kubenswrapper[4820]: I0201 14:32:19.108603 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scjrn\" (UniqueName: \"kubernetes.io/projected/33e9fac7-99b7-41d3-beb0-948c190885c3-kube-api-access-scjrn\") pod \"nmstate-operator-646758c888-tlrts\" (UID: \"33e9fac7-99b7-41d3-beb0-948c190885c3\") " pod="openshift-nmstate/nmstate-operator-646758c888-tlrts" Feb 01 14:32:19 crc kubenswrapper[4820]: I0201 14:32:19.128401 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scjrn\" (UniqueName: \"kubernetes.io/projected/33e9fac7-99b7-41d3-beb0-948c190885c3-kube-api-access-scjrn\") pod \"nmstate-operator-646758c888-tlrts\" (UID: \"33e9fac7-99b7-41d3-beb0-948c190885c3\") " pod="openshift-nmstate/nmstate-operator-646758c888-tlrts" Feb 01 14:32:19 crc kubenswrapper[4820]: I0201 14:32:19.177153 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-tlrts" Feb 01 14:32:19 crc kubenswrapper[4820]: I0201 14:32:19.403105 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-tlrts"] Feb 01 14:32:20 crc kubenswrapper[4820]: I0201 14:32:20.213245 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-tlrts" event={"ID":"33e9fac7-99b7-41d3-beb0-948c190885c3","Type":"ContainerStarted","Data":"d3ac0a22cfbba255a92e44bdaa32b3620253a8e9796a4948a241cc39118b1c0f"} Feb 01 14:32:22 crc kubenswrapper[4820]: I0201 14:32:22.229103 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-tlrts" event={"ID":"33e9fac7-99b7-41d3-beb0-948c190885c3","Type":"ContainerStarted","Data":"2700576c1689909ca78ae465a31f3d0119109461a63601698394639b3320f9a8"} Feb 01 14:32:22 crc kubenswrapper[4820]: I0201 14:32:22.257194 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-tlrts" podStartSLOduration=1.987854879 podStartE2EDuration="4.257176348s" podCreationTimestamp="2026-02-01 14:32:18 +0000 UTC" firstStartedPulling="2026-02-01 14:32:19.411865685 +0000 UTC m=+680.932231969" lastFinishedPulling="2026-02-01 14:32:21.681187154 +0000 UTC m=+683.201553438" observedRunningTime="2026-02-01 14:32:22.252135544 +0000 UTC m=+683.772501848" watchObservedRunningTime="2026-02-01 14:32:22.257176348 +0000 UTC m=+683.777542642" Feb 01 14:32:27 crc kubenswrapper[4820]: I0201 14:32:27.874550 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-bcw4x"] Feb 01 14:32:27 crc kubenswrapper[4820]: I0201 14:32:27.876425 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-bcw4x" Feb 01 14:32:27 crc kubenswrapper[4820]: I0201 14:32:27.878605 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-49snl" Feb 01 14:32:27 crc kubenswrapper[4820]: I0201 14:32:27.880423 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-szpq9"] Feb 01 14:32:27 crc kubenswrapper[4820]: I0201 14:32:27.883935 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-szpq9" Feb 01 14:32:27 crc kubenswrapper[4820]: I0201 14:32:27.889766 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 01 14:32:27 crc kubenswrapper[4820]: I0201 14:32:27.902916 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-bcw4x"] Feb 01 14:32:27 crc kubenswrapper[4820]: I0201 14:32:27.912198 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-szpq9"] Feb 01 14:32:27 crc kubenswrapper[4820]: I0201 14:32:27.916768 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-xnhkw"] Feb 01 14:32:27 crc kubenswrapper[4820]: I0201 14:32:27.917422 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-xnhkw" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.028010 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mxm5\" (UniqueName: \"kubernetes.io/projected/ba7e9c14-2225-4cff-93c0-dc5988f425f0-kube-api-access-8mxm5\") pod \"nmstate-webhook-8474b5b9d8-szpq9\" (UID: \"ba7e9c14-2225-4cff-93c0-dc5988f425f0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-szpq9" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.028167 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jdtr\" (UniqueName: \"kubernetes.io/projected/f7ada872-1aff-4471-bd66-ec3c2ad0e069-kube-api-access-8jdtr\") pod \"nmstate-handler-xnhkw\" (UID: \"f7ada872-1aff-4471-bd66-ec3c2ad0e069\") " pod="openshift-nmstate/nmstate-handler-xnhkw" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.028278 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ba7e9c14-2225-4cff-93c0-dc5988f425f0-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-szpq9\" (UID: \"ba7e9c14-2225-4cff-93c0-dc5988f425f0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-szpq9" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.028340 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f7ada872-1aff-4471-bd66-ec3c2ad0e069-nmstate-lock\") pod \"nmstate-handler-xnhkw\" (UID: \"f7ada872-1aff-4471-bd66-ec3c2ad0e069\") " pod="openshift-nmstate/nmstate-handler-xnhkw" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.028370 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f7ada872-1aff-4471-bd66-ec3c2ad0e069-ovs-socket\") pod \"nmstate-handler-xnhkw\" (UID: \"f7ada872-1aff-4471-bd66-ec3c2ad0e069\") " pod="openshift-nmstate/nmstate-handler-xnhkw" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.028437 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rbl8\" (UniqueName: \"kubernetes.io/projected/2c09606f-2d9d-471c-a638-d7d9aef056eb-kube-api-access-5rbl8\") pod \"nmstate-metrics-54757c584b-bcw4x\" (UID: \"2c09606f-2d9d-471c-a638-d7d9aef056eb\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-bcw4x" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.028496 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f7ada872-1aff-4471-bd66-ec3c2ad0e069-dbus-socket\") pod \"nmstate-handler-xnhkw\" (UID: \"f7ada872-1aff-4471-bd66-ec3c2ad0e069\") " pod="openshift-nmstate/nmstate-handler-xnhkw" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.048455 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-s725j"] Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.049364 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s725j" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.051520 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.051692 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.053133 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-2wx45" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.063082 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-s725j"] Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.130064 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mxm5\" (UniqueName: \"kubernetes.io/projected/ba7e9c14-2225-4cff-93c0-dc5988f425f0-kube-api-access-8mxm5\") pod \"nmstate-webhook-8474b5b9d8-szpq9\" (UID: \"ba7e9c14-2225-4cff-93c0-dc5988f425f0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-szpq9" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.130132 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jdtr\" (UniqueName: \"kubernetes.io/projected/f7ada872-1aff-4471-bd66-ec3c2ad0e069-kube-api-access-8jdtr\") pod \"nmstate-handler-xnhkw\" (UID: \"f7ada872-1aff-4471-bd66-ec3c2ad0e069\") " pod="openshift-nmstate/nmstate-handler-xnhkw" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.130156 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ba7e9c14-2225-4cff-93c0-dc5988f425f0-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-szpq9\" (UID: \"ba7e9c14-2225-4cff-93c0-dc5988f425f0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-szpq9" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.130178 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f7ada872-1aff-4471-bd66-ec3c2ad0e069-nmstate-lock\") pod \"nmstate-handler-xnhkw\" (UID: \"f7ada872-1aff-4471-bd66-ec3c2ad0e069\") " pod="openshift-nmstate/nmstate-handler-xnhkw" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.130208 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f7ada872-1aff-4471-bd66-ec3c2ad0e069-ovs-socket\") pod \"nmstate-handler-xnhkw\" (UID: \"f7ada872-1aff-4471-bd66-ec3c2ad0e069\") " pod="openshift-nmstate/nmstate-handler-xnhkw" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.130234 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rbl8\" (UniqueName: \"kubernetes.io/projected/2c09606f-2d9d-471c-a638-d7d9aef056eb-kube-api-access-5rbl8\") pod \"nmstate-metrics-54757c584b-bcw4x\" (UID: \"2c09606f-2d9d-471c-a638-d7d9aef056eb\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-bcw4x" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.130292 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f7ada872-1aff-4471-bd66-ec3c2ad0e069-nmstate-lock\") pod \"nmstate-handler-xnhkw\" (UID: \"f7ada872-1aff-4471-bd66-ec3c2ad0e069\") " pod="openshift-nmstate/nmstate-handler-xnhkw" Feb 01 
14:32:28 crc kubenswrapper[4820]: E0201 14:32:28.130337 4820 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.130348 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f7ada872-1aff-4471-bd66-ec3c2ad0e069-ovs-socket\") pod \"nmstate-handler-xnhkw\" (UID: \"f7ada872-1aff-4471-bd66-ec3c2ad0e069\") " pod="openshift-nmstate/nmstate-handler-xnhkw" Feb 01 14:32:28 crc kubenswrapper[4820]: E0201 14:32:28.130395 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba7e9c14-2225-4cff-93c0-dc5988f425f0-tls-key-pair podName:ba7e9c14-2225-4cff-93c0-dc5988f425f0 nodeName:}" failed. No retries permitted until 2026-02-01 14:32:28.630374716 +0000 UTC m=+690.150741000 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/ba7e9c14-2225-4cff-93c0-dc5988f425f0-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-szpq9" (UID: "ba7e9c14-2225-4cff-93c0-dc5988f425f0") : secret "openshift-nmstate-webhook" not found Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.130364 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f7ada872-1aff-4471-bd66-ec3c2ad0e069-dbus-socket\") pod \"nmstate-handler-xnhkw\" (UID: \"f7ada872-1aff-4471-bd66-ec3c2ad0e069\") " pod="openshift-nmstate/nmstate-handler-xnhkw" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.130631 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f7ada872-1aff-4471-bd66-ec3c2ad0e069-dbus-socket\") pod \"nmstate-handler-xnhkw\" (UID: \"f7ada872-1aff-4471-bd66-ec3c2ad0e069\") " pod="openshift-nmstate/nmstate-handler-xnhkw" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.150845 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mxm5\" (UniqueName: \"kubernetes.io/projected/ba7e9c14-2225-4cff-93c0-dc5988f425f0-kube-api-access-8mxm5\") pod \"nmstate-webhook-8474b5b9d8-szpq9\" (UID: \"ba7e9c14-2225-4cff-93c0-dc5988f425f0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-szpq9" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.157136 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rbl8\" (UniqueName: \"kubernetes.io/projected/2c09606f-2d9d-471c-a638-d7d9aef056eb-kube-api-access-5rbl8\") pod \"nmstate-metrics-54757c584b-bcw4x\" (UID: \"2c09606f-2d9d-471c-a638-d7d9aef056eb\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-bcw4x" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.161219 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jdtr\" (UniqueName: \"kubernetes.io/projected/f7ada872-1aff-4471-bd66-ec3c2ad0e069-kube-api-access-8jdtr\") pod \"nmstate-handler-xnhkw\" (UID: \"f7ada872-1aff-4471-bd66-ec3c2ad0e069\") " pod="openshift-nmstate/nmstate-handler-xnhkw" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.208618 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-bcw4x" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.234576 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/43cf9246-6240-46c1-abba-9e2fcfaa8d15-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-s725j\" (UID: \"43cf9246-6240-46c1-abba-9e2fcfaa8d15\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s725j" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.234900 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/43cf9246-6240-46c1-abba-9e2fcfaa8d15-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-s725j\" (UID: \"43cf9246-6240-46c1-abba-9e2fcfaa8d15\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s725j" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.234999 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pngfl\" (UniqueName: \"kubernetes.io/projected/43cf9246-6240-46c1-abba-9e2fcfaa8d15-kube-api-access-pngfl\") pod \"nmstate-console-plugin-7754f76f8b-s725j\" (UID: \"43cf9246-6240-46c1-abba-9e2fcfaa8d15\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s725j" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.239902 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-xnhkw" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.242556 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5cfbb7769c-x2kxf"] Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.243224 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.256066 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5cfbb7769c-x2kxf"] Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.337379 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/43cf9246-6240-46c1-abba-9e2fcfaa8d15-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-s725j\" (UID: \"43cf9246-6240-46c1-abba-9e2fcfaa8d15\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s725j" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.337689 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pngfl\" (UniqueName: \"kubernetes.io/projected/43cf9246-6240-46c1-abba-9e2fcfaa8d15-kube-api-access-pngfl\") pod \"nmstate-console-plugin-7754f76f8b-s725j\" (UID: \"43cf9246-6240-46c1-abba-9e2fcfaa8d15\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s725j" Feb 01 14:32:28 crc kubenswrapper[4820]: E0201 14:32:28.337827 4820 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 01 14:32:28 crc kubenswrapper[4820]: E0201 14:32:28.337889 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43cf9246-6240-46c1-abba-9e2fcfaa8d15-plugin-serving-cert podName:43cf9246-6240-46c1-abba-9e2fcfaa8d15 nodeName:}" failed. No retries permitted until 2026-02-01 14:32:28.837860457 +0000 UTC m=+690.358226741 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/43cf9246-6240-46c1-abba-9e2fcfaa8d15-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-s725j" (UID: "43cf9246-6240-46c1-abba-9e2fcfaa8d15") : secret "plugin-serving-cert" not found Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.338184 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/43cf9246-6240-46c1-abba-9e2fcfaa8d15-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-s725j\" (UID: \"43cf9246-6240-46c1-abba-9e2fcfaa8d15\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s725j" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.339601 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/43cf9246-6240-46c1-abba-9e2fcfaa8d15-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-s725j\" (UID: \"43cf9246-6240-46c1-abba-9e2fcfaa8d15\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s725j" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.356271 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pngfl\" (UniqueName: \"kubernetes.io/projected/43cf9246-6240-46c1-abba-9e2fcfaa8d15-kube-api-access-pngfl\") pod \"nmstate-console-plugin-7754f76f8b-s725j\" (UID: \"43cf9246-6240-46c1-abba-9e2fcfaa8d15\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s725j" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.438815 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e4c83b25-8055-4f80-91a7-dc6269bdb3c0-oauth-serving-cert\") pod \"console-5cfbb7769c-x2kxf\" (UID: \"e4c83b25-8055-4f80-91a7-dc6269bdb3c0\") " pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.438887 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4c83b25-8055-4f80-91a7-dc6269bdb3c0-trusted-ca-bundle\") pod \"console-5cfbb7769c-x2kxf\" (UID: \"e4c83b25-8055-4f80-91a7-dc6269bdb3c0\") " pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.439026 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e4c83b25-8055-4f80-91a7-dc6269bdb3c0-console-oauth-config\") pod \"console-5cfbb7769c-x2kxf\" (UID: \"e4c83b25-8055-4f80-91a7-dc6269bdb3c0\") " pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.439048 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7jtv\" (UniqueName: \"kubernetes.io/projected/e4c83b25-8055-4f80-91a7-dc6269bdb3c0-kube-api-access-k7jtv\") pod \"console-5cfbb7769c-x2kxf\" (UID: \"e4c83b25-8055-4f80-91a7-dc6269bdb3c0\") " pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.439087 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e4c83b25-8055-4f80-91a7-dc6269bdb3c0-service-ca\") pod \"console-5cfbb7769c-x2kxf\" (UID: \"e4c83b25-8055-4f80-91a7-dc6269bdb3c0\") " 
pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.439110 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e4c83b25-8055-4f80-91a7-dc6269bdb3c0-console-config\") pod \"console-5cfbb7769c-x2kxf\" (UID: \"e4c83b25-8055-4f80-91a7-dc6269bdb3c0\") " pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.439130 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e4c83b25-8055-4f80-91a7-dc6269bdb3c0-console-serving-cert\") pod \"console-5cfbb7769c-x2kxf\" (UID: \"e4c83b25-8055-4f80-91a7-dc6269bdb3c0\") " pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.442956 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-bcw4x"] Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.540348 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e4c83b25-8055-4f80-91a7-dc6269bdb3c0-service-ca\") pod \"console-5cfbb7769c-x2kxf\" (UID: \"e4c83b25-8055-4f80-91a7-dc6269bdb3c0\") " pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.540401 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e4c83b25-8055-4f80-91a7-dc6269bdb3c0-console-config\") pod \"console-5cfbb7769c-x2kxf\" (UID: \"e4c83b25-8055-4f80-91a7-dc6269bdb3c0\") " pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.540429 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e4c83b25-8055-4f80-91a7-dc6269bdb3c0-console-serving-cert\") pod \"console-5cfbb7769c-x2kxf\" (UID: \"e4c83b25-8055-4f80-91a7-dc6269bdb3c0\") " pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.540453 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e4c83b25-8055-4f80-91a7-dc6269bdb3c0-oauth-serving-cert\") pod \"console-5cfbb7769c-x2kxf\" (UID: \"e4c83b25-8055-4f80-91a7-dc6269bdb3c0\") " pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.540484 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4c83b25-8055-4f80-91a7-dc6269bdb3c0-trusted-ca-bundle\") pod \"console-5cfbb7769c-x2kxf\" (UID: \"e4c83b25-8055-4f80-91a7-dc6269bdb3c0\") " pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.540535 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e4c83b25-8055-4f80-91a7-dc6269bdb3c0-console-oauth-config\") pod \"console-5cfbb7769c-x2kxf\" (UID: \"e4c83b25-8055-4f80-91a7-dc6269bdb3c0\") " pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.540553 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-k7jtv\" (UniqueName: \"kubernetes.io/projected/e4c83b25-8055-4f80-91a7-dc6269bdb3c0-kube-api-access-k7jtv\") pod \"console-5cfbb7769c-x2kxf\" (UID: \"e4c83b25-8055-4f80-91a7-dc6269bdb3c0\") " pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.541756 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e4c83b25-8055-4f80-91a7-dc6269bdb3c0-oauth-serving-cert\") pod \"console-5cfbb7769c-x2kxf\" (UID: \"e4c83b25-8055-4f80-91a7-dc6269bdb3c0\") " pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.542194 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e4c83b25-8055-4f80-91a7-dc6269bdb3c0-service-ca\") pod \"console-5cfbb7769c-x2kxf\" (UID: \"e4c83b25-8055-4f80-91a7-dc6269bdb3c0\") " pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.542609 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e4c83b25-8055-4f80-91a7-dc6269bdb3c0-console-config\") pod \"console-5cfbb7769c-x2kxf\" (UID: \"e4c83b25-8055-4f80-91a7-dc6269bdb3c0\") " pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.543048 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4c83b25-8055-4f80-91a7-dc6269bdb3c0-trusted-ca-bundle\") pod \"console-5cfbb7769c-x2kxf\" (UID: \"e4c83b25-8055-4f80-91a7-dc6269bdb3c0\") " pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.544621 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e4c83b25-8055-4f80-91a7-dc6269bdb3c0-console-oauth-config\") pod \"console-5cfbb7769c-x2kxf\" (UID: \"e4c83b25-8055-4f80-91a7-dc6269bdb3c0\") " pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.545082 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e4c83b25-8055-4f80-91a7-dc6269bdb3c0-console-serving-cert\") pod \"console-5cfbb7769c-x2kxf\" (UID: \"e4c83b25-8055-4f80-91a7-dc6269bdb3c0\") " pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.555629 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7jtv\" (UniqueName: \"kubernetes.io/projected/e4c83b25-8055-4f80-91a7-dc6269bdb3c0-kube-api-access-k7jtv\") pod \"console-5cfbb7769c-x2kxf\" (UID: \"e4c83b25-8055-4f80-91a7-dc6269bdb3c0\") " pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.607463 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.642109 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ba7e9c14-2225-4cff-93c0-dc5988f425f0-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-szpq9\" (UID: \"ba7e9c14-2225-4cff-93c0-dc5988f425f0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-szpq9" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.645075 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ba7e9c14-2225-4cff-93c0-dc5988f425f0-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-szpq9\" (UID: \"ba7e9c14-2225-4cff-93c0-dc5988f425f0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-szpq9" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.817345 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-szpq9" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.844831 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/43cf9246-6240-46c1-abba-9e2fcfaa8d15-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-s725j\" (UID: \"43cf9246-6240-46c1-abba-9e2fcfaa8d15\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s725j" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.848426 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/43cf9246-6240-46c1-abba-9e2fcfaa8d15-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-s725j\" (UID: \"43cf9246-6240-46c1-abba-9e2fcfaa8d15\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s725j" Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.849507 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5cfbb7769c-x2kxf"] Feb 01 14:32:28 crc kubenswrapper[4820]: W0201 14:32:28.855093 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4c83b25_8055_4f80_91a7_dc6269bdb3c0.slice/crio-97ceb74dcac80cf74ad8b18e2e486eb1b2e8be9cd55af21cfad486639df1ba62 WatchSource:0}: Error finding container 97ceb74dcac80cf74ad8b18e2e486eb1b2e8be9cd55af21cfad486639df1ba62: Status 404 returned error can't find the container with id 97ceb74dcac80cf74ad8b18e2e486eb1b2e8be9cd55af21cfad486639df1ba62 Feb 01 14:32:28 crc kubenswrapper[4820]: I0201 14:32:28.963236 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s725j" Feb 01 14:32:29 crc kubenswrapper[4820]: I0201 14:32:29.018427 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-szpq9"] Feb 01 14:32:29 crc kubenswrapper[4820]: W0201 14:32:29.023348 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba7e9c14_2225_4cff_93c0_dc5988f425f0.slice/crio-70eee3e2c44e4647aa6a53dc651b36cbd15a6c3728db1bed7afdc878f883ef96 WatchSource:0}: Error finding container 70eee3e2c44e4647aa6a53dc651b36cbd15a6c3728db1bed7afdc878f883ef96: Status 404 returned error can't find the container with id 70eee3e2c44e4647aa6a53dc651b36cbd15a6c3728db1bed7afdc878f883ef96 Feb 01 14:32:29 crc kubenswrapper[4820]: I0201 14:32:29.161325 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-s725j"] Feb 01 14:32:29 crc kubenswrapper[4820]: I0201 14:32:29.266844 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-szpq9" event={"ID":"ba7e9c14-2225-4cff-93c0-dc5988f425f0","Type":"ContainerStarted","Data":"70eee3e2c44e4647aa6a53dc651b36cbd15a6c3728db1bed7afdc878f883ef96"} Feb 01 14:32:29 crc kubenswrapper[4820]: I0201 14:32:29.268008 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s725j" event={"ID":"43cf9246-6240-46c1-abba-9e2fcfaa8d15","Type":"ContainerStarted","Data":"63ef4b30475293e27949ef3c0a484d8b2a56770c19ca56287cc9afe1ec487df2"} Feb 01 14:32:29 crc kubenswrapper[4820]: I0201 14:32:29.269262 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-bcw4x" event={"ID":"2c09606f-2d9d-471c-a638-d7d9aef056eb","Type":"ContainerStarted","Data":"ecad3e6e8d5199c37abd5b2f57b7215ac25af5f9dfe923ab95d7b365ca73c910"} Feb 01 14:32:29 crc kubenswrapper[4820]: I0201 14:32:29.270678 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cfbb7769c-x2kxf" event={"ID":"e4c83b25-8055-4f80-91a7-dc6269bdb3c0","Type":"ContainerStarted","Data":"f026a2b645430e6d556323600bb0f7ee18383a50e466fe6c8170bfb99ffd4a4e"} Feb 01 14:32:29 crc kubenswrapper[4820]: I0201 14:32:29.270733 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cfbb7769c-x2kxf" event={"ID":"e4c83b25-8055-4f80-91a7-dc6269bdb3c0","Type":"ContainerStarted","Data":"97ceb74dcac80cf74ad8b18e2e486eb1b2e8be9cd55af21cfad486639df1ba62"} Feb 01 14:32:29 crc kubenswrapper[4820]: I0201 14:32:29.271523 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-xnhkw" event={"ID":"f7ada872-1aff-4471-bd66-ec3c2ad0e069","Type":"ContainerStarted","Data":"aa3c170be4274512e1037bd4c5d270120d575d6fe40060d8bfd11c103b4197bb"} Feb 01 14:32:29 crc kubenswrapper[4820]: I0201 14:32:29.290364 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5cfbb7769c-x2kxf" podStartSLOduration=1.29034469 podStartE2EDuration="1.29034469s" podCreationTimestamp="2026-02-01 14:32:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:32:29.286823194 +0000 UTC m=+690.807189478" watchObservedRunningTime="2026-02-01 14:32:29.29034469 +0000 UTC m=+690.810710974" Feb 01 14:32:31 crc kubenswrapper[4820]: I0201 
14:32:31.283493 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-bcw4x" event={"ID":"2c09606f-2d9d-471c-a638-d7d9aef056eb","Type":"ContainerStarted","Data":"9b08a69fedeafa02f6af66f5c06280c3f589e6eaf0d3639016e23d69cddf27b3"} Feb 01 14:32:31 crc kubenswrapper[4820]: I0201 14:32:31.285474 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-xnhkw" event={"ID":"f7ada872-1aff-4471-bd66-ec3c2ad0e069","Type":"ContainerStarted","Data":"2c1a469668e5114e6c754c53f6a673f31aa9c12b23c6369f930170cd6ec36d73"} Feb 01 14:32:31 crc kubenswrapper[4820]: I0201 14:32:31.285901 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-xnhkw" Feb 01 14:32:31 crc kubenswrapper[4820]: I0201 14:32:31.287077 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-szpq9" event={"ID":"ba7e9c14-2225-4cff-93c0-dc5988f425f0","Type":"ContainerStarted","Data":"30b129d181449d0946e3e0bf2f2f473dfdb1d522bb1519b16043dc1637305a45"} Feb 01 14:32:31 crc kubenswrapper[4820]: I0201 14:32:31.287270 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-szpq9" Feb 01 14:32:31 crc kubenswrapper[4820]: I0201 14:32:31.303413 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-xnhkw" podStartSLOduration=2.056068114 podStartE2EDuration="4.303391584s" podCreationTimestamp="2026-02-01 14:32:27 +0000 UTC" firstStartedPulling="2026-02-01 14:32:28.280532733 +0000 UTC m=+689.800899017" lastFinishedPulling="2026-02-01 14:32:30.527856203 +0000 UTC m=+692.048222487" observedRunningTime="2026-02-01 14:32:31.298378551 +0000 UTC m=+692.818744855" watchObservedRunningTime="2026-02-01 14:32:31.303391584 +0000 UTC m=+692.823757868" Feb 01 14:32:31 crc kubenswrapper[4820]: I0201 14:32:31.320538 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-szpq9" podStartSLOduration=2.818301838 podStartE2EDuration="4.320522073s" podCreationTimestamp="2026-02-01 14:32:27 +0000 UTC" firstStartedPulling="2026-02-01 14:32:29.025387802 +0000 UTC m=+690.545754086" lastFinishedPulling="2026-02-01 14:32:30.527608037 +0000 UTC m=+692.047974321" observedRunningTime="2026-02-01 14:32:31.317303585 +0000 UTC m=+692.837669879" watchObservedRunningTime="2026-02-01 14:32:31.320522073 +0000 UTC m=+692.840888367" Feb 01 14:32:32 crc kubenswrapper[4820]: I0201 14:32:32.294658 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s725j" event={"ID":"43cf9246-6240-46c1-abba-9e2fcfaa8d15","Type":"ContainerStarted","Data":"845722d26bf42a1c0680de833338584cac1beba672e64dbe7f09e88c4c0e5978"} Feb 01 14:32:32 crc kubenswrapper[4820]: I0201 14:32:32.311584 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-s725j" podStartSLOduration=1.9629875509999999 podStartE2EDuration="4.311565291s" podCreationTimestamp="2026-02-01 14:32:28 +0000 UTC" firstStartedPulling="2026-02-01 14:32:29.170116266 +0000 UTC m=+690.690482540" lastFinishedPulling="2026-02-01 14:32:31.518693996 +0000 UTC m=+693.039060280" observedRunningTime="2026-02-01 14:32:32.308054315 +0000 UTC m=+693.828420619" watchObservedRunningTime="2026-02-01 14:32:32.311565291 +0000 UTC m=+693.831931575" Feb 01 14:32:33 crc 
kubenswrapper[4820]: I0201 14:32:33.301745 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-bcw4x" event={"ID":"2c09606f-2d9d-471c-a638-d7d9aef056eb","Type":"ContainerStarted","Data":"76d78847ee2b413295a37be00d903612e7c21b72b877a3403f6dc525d2cd44ea"} Feb 01 14:32:33 crc kubenswrapper[4820]: I0201 14:32:33.317795 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-bcw4x" podStartSLOduration=1.816749294 podStartE2EDuration="6.317775211s" podCreationTimestamp="2026-02-01 14:32:27 +0000 UTC" firstStartedPulling="2026-02-01 14:32:28.448160538 +0000 UTC m=+689.968526822" lastFinishedPulling="2026-02-01 14:32:32.949186455 +0000 UTC m=+694.469552739" observedRunningTime="2026-02-01 14:32:33.314774987 +0000 UTC m=+694.835141321" watchObservedRunningTime="2026-02-01 14:32:33.317775211 +0000 UTC m=+694.838141505" Feb 01 14:32:38 crc kubenswrapper[4820]: I0201 14:32:38.262043 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-xnhkw" Feb 01 14:32:38 crc kubenswrapper[4820]: I0201 14:32:38.608625 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:38 crc kubenswrapper[4820]: I0201 14:32:38.608709 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:38 crc kubenswrapper[4820]: I0201 14:32:38.612956 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:39 crc kubenswrapper[4820]: I0201 14:32:39.334490 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5cfbb7769c-x2kxf" Feb 01 14:32:39 crc kubenswrapper[4820]: I0201 14:32:39.380208 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-j7nmb"] Feb 01 14:32:48 crc kubenswrapper[4820]: I0201 14:32:48.823483 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-szpq9" Feb 01 14:33:02 crc kubenswrapper[4820]: I0201 14:33:02.049773 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff"] Feb 01 14:33:02 crc kubenswrapper[4820]: I0201 14:33:02.052441 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff" Feb 01 14:33:02 crc kubenswrapper[4820]: I0201 14:33:02.057232 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff"] Feb 01 14:33:02 crc kubenswrapper[4820]: I0201 14:33:02.058021 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 01 14:33:02 crc kubenswrapper[4820]: I0201 14:33:02.075141 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qskz\" (UniqueName: \"kubernetes.io/projected/fc1ce558-b23f-496a-946a-d25c4fb31282-kube-api-access-9qskz\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff\" (UID: \"fc1ce558-b23f-496a-946a-d25c4fb31282\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff" Feb 01 14:33:02 crc kubenswrapper[4820]: I0201 14:33:02.075219 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc1ce558-b23f-496a-946a-d25c4fb31282-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff\" (UID: \"fc1ce558-b23f-496a-946a-d25c4fb31282\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff" Feb 01 14:33:02 crc kubenswrapper[4820]: I0201 14:33:02.075259 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc1ce558-b23f-496a-946a-d25c4fb31282-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff\" (UID: \"fc1ce558-b23f-496a-946a-d25c4fb31282\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff" Feb 01 14:33:02 crc kubenswrapper[4820]: I0201 14:33:02.176365 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc1ce558-b23f-496a-946a-d25c4fb31282-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff\" (UID: \"fc1ce558-b23f-496a-946a-d25c4fb31282\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff" Feb 01 14:33:02 crc kubenswrapper[4820]: I0201 14:33:02.176729 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qskz\" (UniqueName: \"kubernetes.io/projected/fc1ce558-b23f-496a-946a-d25c4fb31282-kube-api-access-9qskz\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff\" (UID: \"fc1ce558-b23f-496a-946a-d25c4fb31282\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff" Feb 01 14:33:02 crc kubenswrapper[4820]: I0201 14:33:02.176789 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc1ce558-b23f-496a-946a-d25c4fb31282-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff\" (UID: \"fc1ce558-b23f-496a-946a-d25c4fb31282\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff" Feb 01 14:33:02 crc kubenswrapper[4820]: I0201 14:33:02.176832 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/fc1ce558-b23f-496a-946a-d25c4fb31282-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff\" (UID: \"fc1ce558-b23f-496a-946a-d25c4fb31282\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff" Feb 01 14:33:02 crc kubenswrapper[4820]: I0201 14:33:02.177218 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc1ce558-b23f-496a-946a-d25c4fb31282-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff\" (UID: \"fc1ce558-b23f-496a-946a-d25c4fb31282\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff" Feb 01 14:33:02 crc kubenswrapper[4820]: I0201 14:33:02.193979 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qskz\" (UniqueName: \"kubernetes.io/projected/fc1ce558-b23f-496a-946a-d25c4fb31282-kube-api-access-9qskz\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff\" (UID: \"fc1ce558-b23f-496a-946a-d25c4fb31282\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff" Feb 01 14:33:02 crc kubenswrapper[4820]: I0201 14:33:02.402212 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff" Feb 01 14:33:02 crc kubenswrapper[4820]: I0201 14:33:02.781759 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff"] Feb 01 14:33:02 crc kubenswrapper[4820]: W0201 14:33:02.792114 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc1ce558_b23f_496a_946a_d25c4fb31282.slice/crio-c87b907c47ecdef8c899003d8f41b95d2c34699abeea76cb2bd640c963d1c1b8 WatchSource:0}: Error finding container c87b907c47ecdef8c899003d8f41b95d2c34699abeea76cb2bd640c963d1c1b8: Status 404 returned error can't find the container with id c87b907c47ecdef8c899003d8f41b95d2c34699abeea76cb2bd640c963d1c1b8 Feb 01 14:33:03 crc kubenswrapper[4820]: I0201 14:33:03.470470 4820 generic.go:334] "Generic (PLEG): container finished" podID="fc1ce558-b23f-496a-946a-d25c4fb31282" containerID="fda3c2ef9d56e8044977ed8defc0e1bd48d21f1dacf0a96485ff9a624af8862d" exitCode=0 Feb 01 14:33:03 crc kubenswrapper[4820]: I0201 14:33:03.470681 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff" event={"ID":"fc1ce558-b23f-496a-946a-d25c4fb31282","Type":"ContainerDied","Data":"fda3c2ef9d56e8044977ed8defc0e1bd48d21f1dacf0a96485ff9a624af8862d"} Feb 01 14:33:03 crc kubenswrapper[4820]: I0201 14:33:03.470794 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff" event={"ID":"fc1ce558-b23f-496a-946a-d25c4fb31282","Type":"ContainerStarted","Data":"c87b907c47ecdef8c899003d8f41b95d2c34699abeea76cb2bd640c963d1c1b8"} Feb 01 14:33:04 crc kubenswrapper[4820]: I0201 14:33:04.420509 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-j7nmb" podUID="ecb9a255-58d7-4e9e-8861-7b31bc44ed3e" containerName="console" containerID="cri-o://b5b126410647fb7365db11227a88f9218488b2a524a021bbd6216ea494032de0" gracePeriod=15 Feb 01 14:33:05 crc 
kubenswrapper[4820]: I0201 14:33:04.805183 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-j7nmb_ecb9a255-58d7-4e9e-8861-7b31bc44ed3e/console/0.log" Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:04.805438 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:04.905598 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-trusted-ca-bundle\") pod \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:04.905649 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-console-serving-cert\") pod \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:04.905675 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-service-ca\") pod \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:04.905732 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj48d\" (UniqueName: \"kubernetes.io/projected/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-kube-api-access-tj48d\") pod \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:04.905799 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-oauth-serving-cert\") pod \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:04.905856 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-console-config\") pod \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:04.905913 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-console-oauth-config\") pod \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\" (UID: \"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e\") " Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:04.906675 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ecb9a255-58d7-4e9e-8861-7b31bc44ed3e" (UID: "ecb9a255-58d7-4e9e-8861-7b31bc44ed3e"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:04.906732 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-console-config" (OuterVolumeSpecName: "console-config") pod "ecb9a255-58d7-4e9e-8861-7b31bc44ed3e" (UID: "ecb9a255-58d7-4e9e-8861-7b31bc44ed3e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:04.906796 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-service-ca" (OuterVolumeSpecName: "service-ca") pod "ecb9a255-58d7-4e9e-8861-7b31bc44ed3e" (UID: "ecb9a255-58d7-4e9e-8861-7b31bc44ed3e"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:04.907312 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ecb9a255-58d7-4e9e-8861-7b31bc44ed3e" (UID: "ecb9a255-58d7-4e9e-8861-7b31bc44ed3e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:04.912853 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-kube-api-access-tj48d" (OuterVolumeSpecName: "kube-api-access-tj48d") pod "ecb9a255-58d7-4e9e-8861-7b31bc44ed3e" (UID: "ecb9a255-58d7-4e9e-8861-7b31bc44ed3e"). InnerVolumeSpecName "kube-api-access-tj48d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:04.912889 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ecb9a255-58d7-4e9e-8861-7b31bc44ed3e" (UID: "ecb9a255-58d7-4e9e-8861-7b31bc44ed3e"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:04.915717 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ecb9a255-58d7-4e9e-8861-7b31bc44ed3e" (UID: "ecb9a255-58d7-4e9e-8861-7b31bc44ed3e"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:05.007274 4820 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:05.007301 4820 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-console-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:05.007309 4820 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:05.007319 4820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:05.007327 4820 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:05.007335 4820 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-service-ca\") on node \"crc\" DevicePath \"\"" Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:05.007343 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tj48d\" (UniqueName: \"kubernetes.io/projected/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e-kube-api-access-tj48d\") on node \"crc\" DevicePath \"\"" Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:05.484051 4820 generic.go:334] "Generic (PLEG): container finished" podID="fc1ce558-b23f-496a-946a-d25c4fb31282" containerID="f5e96c6c4b157236c24488bbcf567a697eec95b92f3345bad804e711b1a41164" exitCode=0 Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:05.484229 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff" event={"ID":"fc1ce558-b23f-496a-946a-d25c4fb31282","Type":"ContainerDied","Data":"f5e96c6c4b157236c24488bbcf567a697eec95b92f3345bad804e711b1a41164"} Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:05.486703 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-j7nmb_ecb9a255-58d7-4e9e-8861-7b31bc44ed3e/console/0.log" Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:05.486813 4820 generic.go:334] "Generic (PLEG): container finished" podID="ecb9a255-58d7-4e9e-8861-7b31bc44ed3e" containerID="b5b126410647fb7365db11227a88f9218488b2a524a021bbd6216ea494032de0" exitCode=2 Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:05.486891 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-j7nmb" Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:05.486984 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-j7nmb" event={"ID":"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e","Type":"ContainerDied","Data":"b5b126410647fb7365db11227a88f9218488b2a524a021bbd6216ea494032de0"} Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:05.487032 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-j7nmb" event={"ID":"ecb9a255-58d7-4e9e-8861-7b31bc44ed3e","Type":"ContainerDied","Data":"df67af08971a11c33d3b0d7e6d5b8b495560cc48d4e7f9e278a09a75048545e2"} Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:05.487085 4820 scope.go:117] "RemoveContainer" containerID="b5b126410647fb7365db11227a88f9218488b2a524a021bbd6216ea494032de0" Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:05.509517 4820 scope.go:117] "RemoveContainer" containerID="b5b126410647fb7365db11227a88f9218488b2a524a021bbd6216ea494032de0" Feb 01 14:33:05 crc kubenswrapper[4820]: E0201 14:33:05.510161 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5b126410647fb7365db11227a88f9218488b2a524a021bbd6216ea494032de0\": container with ID starting with b5b126410647fb7365db11227a88f9218488b2a524a021bbd6216ea494032de0 not found: ID does not exist" containerID="b5b126410647fb7365db11227a88f9218488b2a524a021bbd6216ea494032de0" Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:05.510194 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5b126410647fb7365db11227a88f9218488b2a524a021bbd6216ea494032de0"} err="failed to get container status \"b5b126410647fb7365db11227a88f9218488b2a524a021bbd6216ea494032de0\": rpc error: code = NotFound desc = could not find container \"b5b126410647fb7365db11227a88f9218488b2a524a021bbd6216ea494032de0\": container with ID starting with b5b126410647fb7365db11227a88f9218488b2a524a021bbd6216ea494032de0 not found: ID does not exist" Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:05.523041 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-j7nmb"] Feb 01 14:33:05 crc kubenswrapper[4820]: I0201 14:33:05.528194 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-j7nmb"] Feb 01 14:33:06 crc kubenswrapper[4820]: I0201 14:33:06.496796 4820 generic.go:334] "Generic (PLEG): container finished" podID="fc1ce558-b23f-496a-946a-d25c4fb31282" containerID="95fc8687abbc08bec68c560053361333136642d1dec3b75698176c0315a9d723" exitCode=0 Feb 01 14:33:06 crc kubenswrapper[4820]: I0201 14:33:06.496862 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff" event={"ID":"fc1ce558-b23f-496a-946a-d25c4fb31282","Type":"ContainerDied","Data":"95fc8687abbc08bec68c560053361333136642d1dec3b75698176c0315a9d723"} Feb 01 14:33:07 crc kubenswrapper[4820]: I0201 14:33:07.207570 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecb9a255-58d7-4e9e-8861-7b31bc44ed3e" path="/var/lib/kubelet/pods/ecb9a255-58d7-4e9e-8861-7b31bc44ed3e/volumes" Feb 01 14:33:07 crc kubenswrapper[4820]: I0201 14:33:07.713924 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff" Feb 01 14:33:07 crc kubenswrapper[4820]: I0201 14:33:07.839536 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc1ce558-b23f-496a-946a-d25c4fb31282-bundle\") pod \"fc1ce558-b23f-496a-946a-d25c4fb31282\" (UID: \"fc1ce558-b23f-496a-946a-d25c4fb31282\") " Feb 01 14:33:07 crc kubenswrapper[4820]: I0201 14:33:07.839647 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qskz\" (UniqueName: \"kubernetes.io/projected/fc1ce558-b23f-496a-946a-d25c4fb31282-kube-api-access-9qskz\") pod \"fc1ce558-b23f-496a-946a-d25c4fb31282\" (UID: \"fc1ce558-b23f-496a-946a-d25c4fb31282\") " Feb 01 14:33:07 crc kubenswrapper[4820]: I0201 14:33:07.840152 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc1ce558-b23f-496a-946a-d25c4fb31282-util\") pod \"fc1ce558-b23f-496a-946a-d25c4fb31282\" (UID: \"fc1ce558-b23f-496a-946a-d25c4fb31282\") " Feb 01 14:33:07 crc kubenswrapper[4820]: I0201 14:33:07.840927 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc1ce558-b23f-496a-946a-d25c4fb31282-bundle" (OuterVolumeSpecName: "bundle") pod "fc1ce558-b23f-496a-946a-d25c4fb31282" (UID: "fc1ce558-b23f-496a-946a-d25c4fb31282"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:33:07 crc kubenswrapper[4820]: I0201 14:33:07.843419 4820 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc1ce558-b23f-496a-946a-d25c4fb31282-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:33:07 crc kubenswrapper[4820]: I0201 14:33:07.853311 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc1ce558-b23f-496a-946a-d25c4fb31282-kube-api-access-9qskz" (OuterVolumeSpecName: "kube-api-access-9qskz") pod "fc1ce558-b23f-496a-946a-d25c4fb31282" (UID: "fc1ce558-b23f-496a-946a-d25c4fb31282"). InnerVolumeSpecName "kube-api-access-9qskz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:33:07 crc kubenswrapper[4820]: I0201 14:33:07.854995 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc1ce558-b23f-496a-946a-d25c4fb31282-util" (OuterVolumeSpecName: "util") pod "fc1ce558-b23f-496a-946a-d25c4fb31282" (UID: "fc1ce558-b23f-496a-946a-d25c4fb31282"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:33:07 crc kubenswrapper[4820]: I0201 14:33:07.944249 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qskz\" (UniqueName: \"kubernetes.io/projected/fc1ce558-b23f-496a-946a-d25c4fb31282-kube-api-access-9qskz\") on node \"crc\" DevicePath \"\"" Feb 01 14:33:07 crc kubenswrapper[4820]: I0201 14:33:07.944317 4820 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc1ce558-b23f-496a-946a-d25c4fb31282-util\") on node \"crc\" DevicePath \"\"" Feb 01 14:33:08 crc kubenswrapper[4820]: I0201 14:33:08.510859 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff" event={"ID":"fc1ce558-b23f-496a-946a-d25c4fb31282","Type":"ContainerDied","Data":"c87b907c47ecdef8c899003d8f41b95d2c34699abeea76cb2bd640c963d1c1b8"} Feb 01 14:33:08 crc kubenswrapper[4820]: I0201 14:33:08.510951 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c87b907c47ecdef8c899003d8f41b95d2c34699abeea76cb2bd640c963d1c1b8" Feb 01 14:33:08 crc kubenswrapper[4820]: I0201 14:33:08.510985 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.473472 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7bcdb74864-zp85l"] Feb 01 14:33:17 crc kubenswrapper[4820]: E0201 14:33:17.474288 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb9a255-58d7-4e9e-8861-7b31bc44ed3e" containerName="console" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.474305 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb9a255-58d7-4e9e-8861-7b31bc44ed3e" containerName="console" Feb 01 14:33:17 crc kubenswrapper[4820]: E0201 14:33:17.474324 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc1ce558-b23f-496a-946a-d25c4fb31282" containerName="extract" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.474334 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc1ce558-b23f-496a-946a-d25c4fb31282" containerName="extract" Feb 01 14:33:17 crc kubenswrapper[4820]: E0201 14:33:17.474346 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc1ce558-b23f-496a-946a-d25c4fb31282" containerName="pull" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.474354 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc1ce558-b23f-496a-946a-d25c4fb31282" containerName="pull" Feb 01 14:33:17 crc kubenswrapper[4820]: E0201 14:33:17.474366 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc1ce558-b23f-496a-946a-d25c4fb31282" containerName="util" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.474373 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc1ce558-b23f-496a-946a-d25c4fb31282" containerName="util" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.474490 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc1ce558-b23f-496a-946a-d25c4fb31282" containerName="extract" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.474514 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb9a255-58d7-4e9e-8861-7b31bc44ed3e" containerName="console" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.474970 
4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7bcdb74864-zp85l" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.477018 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.477027 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.477788 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.477932 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-wc8rr" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.477932 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.495260 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7bcdb74864-zp85l"] Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.558784 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m56vd\" (UniqueName: \"kubernetes.io/projected/daa33ef9-cdc3-4aa9-8861-48c4175b8c99-kube-api-access-m56vd\") pod \"metallb-operator-controller-manager-7bcdb74864-zp85l\" (UID: \"daa33ef9-cdc3-4aa9-8861-48c4175b8c99\") " pod="metallb-system/metallb-operator-controller-manager-7bcdb74864-zp85l" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.558869 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/daa33ef9-cdc3-4aa9-8861-48c4175b8c99-apiservice-cert\") pod \"metallb-operator-controller-manager-7bcdb74864-zp85l\" (UID: \"daa33ef9-cdc3-4aa9-8861-48c4175b8c99\") " pod="metallb-system/metallb-operator-controller-manager-7bcdb74864-zp85l" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.558924 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/daa33ef9-cdc3-4aa9-8861-48c4175b8c99-webhook-cert\") pod \"metallb-operator-controller-manager-7bcdb74864-zp85l\" (UID: \"daa33ef9-cdc3-4aa9-8861-48c4175b8c99\") " pod="metallb-system/metallb-operator-controller-manager-7bcdb74864-zp85l" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.660648 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m56vd\" (UniqueName: \"kubernetes.io/projected/daa33ef9-cdc3-4aa9-8861-48c4175b8c99-kube-api-access-m56vd\") pod \"metallb-operator-controller-manager-7bcdb74864-zp85l\" (UID: \"daa33ef9-cdc3-4aa9-8861-48c4175b8c99\") " pod="metallb-system/metallb-operator-controller-manager-7bcdb74864-zp85l" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.660700 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/daa33ef9-cdc3-4aa9-8861-48c4175b8c99-apiservice-cert\") pod \"metallb-operator-controller-manager-7bcdb74864-zp85l\" (UID: \"daa33ef9-cdc3-4aa9-8861-48c4175b8c99\") " pod="metallb-system/metallb-operator-controller-manager-7bcdb74864-zp85l" Feb 01 
14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.660730 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/daa33ef9-cdc3-4aa9-8861-48c4175b8c99-webhook-cert\") pod \"metallb-operator-controller-manager-7bcdb74864-zp85l\" (UID: \"daa33ef9-cdc3-4aa9-8861-48c4175b8c99\") " pod="metallb-system/metallb-operator-controller-manager-7bcdb74864-zp85l" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.666239 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/daa33ef9-cdc3-4aa9-8861-48c4175b8c99-apiservice-cert\") pod \"metallb-operator-controller-manager-7bcdb74864-zp85l\" (UID: \"daa33ef9-cdc3-4aa9-8861-48c4175b8c99\") " pod="metallb-system/metallb-operator-controller-manager-7bcdb74864-zp85l" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.678295 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/daa33ef9-cdc3-4aa9-8861-48c4175b8c99-webhook-cert\") pod \"metallb-operator-controller-manager-7bcdb74864-zp85l\" (UID: \"daa33ef9-cdc3-4aa9-8861-48c4175b8c99\") " pod="metallb-system/metallb-operator-controller-manager-7bcdb74864-zp85l" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.682558 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m56vd\" (UniqueName: \"kubernetes.io/projected/daa33ef9-cdc3-4aa9-8861-48c4175b8c99-kube-api-access-m56vd\") pod \"metallb-operator-controller-manager-7bcdb74864-zp85l\" (UID: \"daa33ef9-cdc3-4aa9-8861-48c4175b8c99\") " pod="metallb-system/metallb-operator-controller-manager-7bcdb74864-zp85l" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.736000 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-56944747c7-brf8w"] Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.737301 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-56944747c7-brf8w" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.739553 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.739578 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.739917 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-jrcxp" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.752442 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-56944747c7-brf8w"] Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.761772 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a374cc63-6735-4835-b5af-b1367ff52823-webhook-cert\") pod \"metallb-operator-webhook-server-56944747c7-brf8w\" (UID: \"a374cc63-6735-4835-b5af-b1367ff52823\") " pod="metallb-system/metallb-operator-webhook-server-56944747c7-brf8w" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.761859 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw2b8\" (UniqueName: \"kubernetes.io/projected/a374cc63-6735-4835-b5af-b1367ff52823-kube-api-access-bw2b8\") pod \"metallb-operator-webhook-server-56944747c7-brf8w\" (UID: \"a374cc63-6735-4835-b5af-b1367ff52823\") " pod="metallb-system/metallb-operator-webhook-server-56944747c7-brf8w" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.761904 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a374cc63-6735-4835-b5af-b1367ff52823-apiservice-cert\") pod \"metallb-operator-webhook-server-56944747c7-brf8w\" (UID: \"a374cc63-6735-4835-b5af-b1367ff52823\") " pod="metallb-system/metallb-operator-webhook-server-56944747c7-brf8w" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.794466 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7bcdb74864-zp85l" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.863495 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw2b8\" (UniqueName: \"kubernetes.io/projected/a374cc63-6735-4835-b5af-b1367ff52823-kube-api-access-bw2b8\") pod \"metallb-operator-webhook-server-56944747c7-brf8w\" (UID: \"a374cc63-6735-4835-b5af-b1367ff52823\") " pod="metallb-system/metallb-operator-webhook-server-56944747c7-brf8w" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.863844 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a374cc63-6735-4835-b5af-b1367ff52823-apiservice-cert\") pod \"metallb-operator-webhook-server-56944747c7-brf8w\" (UID: \"a374cc63-6735-4835-b5af-b1367ff52823\") " pod="metallb-system/metallb-operator-webhook-server-56944747c7-brf8w" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.863904 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a374cc63-6735-4835-b5af-b1367ff52823-webhook-cert\") pod \"metallb-operator-webhook-server-56944747c7-brf8w\" (UID: \"a374cc63-6735-4835-b5af-b1367ff52823\") " pod="metallb-system/metallb-operator-webhook-server-56944747c7-brf8w" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.869569 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a374cc63-6735-4835-b5af-b1367ff52823-webhook-cert\") pod \"metallb-operator-webhook-server-56944747c7-brf8w\" (UID: \"a374cc63-6735-4835-b5af-b1367ff52823\") " pod="metallb-system/metallb-operator-webhook-server-56944747c7-brf8w" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.869649 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a374cc63-6735-4835-b5af-b1367ff52823-apiservice-cert\") pod \"metallb-operator-webhook-server-56944747c7-brf8w\" (UID: \"a374cc63-6735-4835-b5af-b1367ff52823\") " pod="metallb-system/metallb-operator-webhook-server-56944747c7-brf8w" Feb 01 14:33:17 crc kubenswrapper[4820]: I0201 14:33:17.881639 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw2b8\" (UniqueName: \"kubernetes.io/projected/a374cc63-6735-4835-b5af-b1367ff52823-kube-api-access-bw2b8\") pod \"metallb-operator-webhook-server-56944747c7-brf8w\" (UID: \"a374cc63-6735-4835-b5af-b1367ff52823\") " pod="metallb-system/metallb-operator-webhook-server-56944747c7-brf8w" Feb 01 14:33:18 crc kubenswrapper[4820]: I0201 14:33:18.050046 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-56944747c7-brf8w" Feb 01 14:33:18 crc kubenswrapper[4820]: I0201 14:33:18.111192 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7bcdb74864-zp85l"] Feb 01 14:33:18 crc kubenswrapper[4820]: I0201 14:33:18.481231 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-56944747c7-brf8w"] Feb 01 14:33:18 crc kubenswrapper[4820]: W0201 14:33:18.487362 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda374cc63_6735_4835_b5af_b1367ff52823.slice/crio-02dfe17d6a53e1c6d86b5bda5aee34d9d676e643699b433276aef92cdfdb5aea WatchSource:0}: Error finding container 02dfe17d6a53e1c6d86b5bda5aee34d9d676e643699b433276aef92cdfdb5aea: Status 404 returned error can't find the container with id 02dfe17d6a53e1c6d86b5bda5aee34d9d676e643699b433276aef92cdfdb5aea Feb 01 14:33:18 crc kubenswrapper[4820]: I0201 14:33:18.820450 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-56944747c7-brf8w" event={"ID":"a374cc63-6735-4835-b5af-b1367ff52823","Type":"ContainerStarted","Data":"02dfe17d6a53e1c6d86b5bda5aee34d9d676e643699b433276aef92cdfdb5aea"} Feb 01 14:33:18 crc kubenswrapper[4820]: I0201 14:33:18.821862 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7bcdb74864-zp85l" event={"ID":"daa33ef9-cdc3-4aa9-8861-48c4175b8c99","Type":"ContainerStarted","Data":"aae1e2ff7481837c5846c7892287a94e0be54d0cbee0e6d7e164abbb4ec88c6e"} Feb 01 14:33:19 crc kubenswrapper[4820]: I0201 14:33:19.242163 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:33:19 crc kubenswrapper[4820]: I0201 14:33:19.242234 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:33:23 crc kubenswrapper[4820]: I0201 14:33:23.850989 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-56944747c7-brf8w" event={"ID":"a374cc63-6735-4835-b5af-b1367ff52823","Type":"ContainerStarted","Data":"eda519d15665aba8bbd57df5eccd20e425e9ae3c316b1654fe9eb2a7c7d5f9c4"} Feb 01 14:33:23 crc kubenswrapper[4820]: I0201 14:33:23.851626 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-56944747c7-brf8w" Feb 01 14:33:23 crc kubenswrapper[4820]: I0201 14:33:23.852417 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7bcdb74864-zp85l" event={"ID":"daa33ef9-cdc3-4aa9-8861-48c4175b8c99","Type":"ContainerStarted","Data":"01aca115bc6d05cb1ceee7ef1c7aed20e6597d9ea65d052621f2e8d7f6f9323b"} Feb 01 14:33:23 crc kubenswrapper[4820]: I0201 14:33:23.852537 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7bcdb74864-zp85l" Feb 01 
14:33:23 crc kubenswrapper[4820]: I0201 14:33:23.876825 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-56944747c7-brf8w" podStartSLOduration=1.820898617 podStartE2EDuration="6.876807113s" podCreationTimestamp="2026-02-01 14:33:17 +0000 UTC" firstStartedPulling="2026-02-01 14:33:18.492810256 +0000 UTC m=+740.013176540" lastFinishedPulling="2026-02-01 14:33:23.548718762 +0000 UTC m=+745.069085036" observedRunningTime="2026-02-01 14:33:23.874044077 +0000 UTC m=+745.394410361" watchObservedRunningTime="2026-02-01 14:33:23.876807113 +0000 UTC m=+745.397173407" Feb 01 14:33:23 crc kubenswrapper[4820]: I0201 14:33:23.902774 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7bcdb74864-zp85l" podStartSLOduration=1.538692654 podStartE2EDuration="6.902752219s" podCreationTimestamp="2026-02-01 14:33:17 +0000 UTC" firstStartedPulling="2026-02-01 14:33:18.12595035 +0000 UTC m=+739.646316634" lastFinishedPulling="2026-02-01 14:33:23.490009915 +0000 UTC m=+745.010376199" observedRunningTime="2026-02-01 14:33:23.901308044 +0000 UTC m=+745.421674338" watchObservedRunningTime="2026-02-01 14:33:23.902752219 +0000 UTC m=+745.423118503" Feb 01 14:33:33 crc kubenswrapper[4820]: I0201 14:33:33.177226 4820 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 01 14:33:38 crc kubenswrapper[4820]: I0201 14:33:38.056009 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-56944747c7-brf8w" Feb 01 14:33:49 crc kubenswrapper[4820]: I0201 14:33:49.242371 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:33:49 crc kubenswrapper[4820]: I0201 14:33:49.243163 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:33:57 crc kubenswrapper[4820]: I0201 14:33:57.797422 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7bcdb74864-zp85l" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.535912 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-9l7gp"] Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.538334 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.540077 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-s8sgk" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.540238 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.540352 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.541450 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-k2kpp"] Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.542127 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k2kpp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.543743 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.552196 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-k2kpp"] Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.571167 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5bb6335f-34a1-4d93-b71c-74b5f62ea699-metrics\") pod \"frr-k8s-9l7gp\" (UID: \"5bb6335f-34a1-4d93-b71c-74b5f62ea699\") " pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.571210 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5bb6335f-34a1-4d93-b71c-74b5f62ea699-frr-conf\") pod \"frr-k8s-9l7gp\" (UID: \"5bb6335f-34a1-4d93-b71c-74b5f62ea699\") " pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.571234 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5bb6335f-34a1-4d93-b71c-74b5f62ea699-frr-sockets\") pod \"frr-k8s-9l7gp\" (UID: \"5bb6335f-34a1-4d93-b71c-74b5f62ea699\") " pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.571257 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e8f0fec3-947b-4599-8fc7-e588a982471e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-k2kpp\" (UID: \"e8f0fec3-947b-4599-8fc7-e588a982471e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k2kpp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.571287 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5bb6335f-34a1-4d93-b71c-74b5f62ea699-frr-startup\") pod \"frr-k8s-9l7gp\" (UID: \"5bb6335f-34a1-4d93-b71c-74b5f62ea699\") " pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.571404 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5bb6335f-34a1-4d93-b71c-74b5f62ea699-metrics-certs\") pod \"frr-k8s-9l7gp\" (UID: \"5bb6335f-34a1-4d93-b71c-74b5f62ea699\") 
" pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.571543 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5bb6335f-34a1-4d93-b71c-74b5f62ea699-reloader\") pod \"frr-k8s-9l7gp\" (UID: \"5bb6335f-34a1-4d93-b71c-74b5f62ea699\") " pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.571587 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jsmb\" (UniqueName: \"kubernetes.io/projected/e8f0fec3-947b-4599-8fc7-e588a982471e-kube-api-access-8jsmb\") pod \"frr-k8s-webhook-server-7df86c4f6c-k2kpp\" (UID: \"e8f0fec3-947b-4599-8fc7-e588a982471e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k2kpp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.571654 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kfhf\" (UniqueName: \"kubernetes.io/projected/5bb6335f-34a1-4d93-b71c-74b5f62ea699-kube-api-access-9kfhf\") pod \"frr-k8s-9l7gp\" (UID: \"5bb6335f-34a1-4d93-b71c-74b5f62ea699\") " pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.608110 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-gtf4x"] Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.609128 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-gtf4x" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.612197 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.619474 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.621819 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-6kdxv" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.622085 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.644419 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-dg6dc"] Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.645577 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-dg6dc" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.651104 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.663856 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-dg6dc"] Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.672598 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8c2d8f0-d9c2-4b9e-802f-ec6a34533357-metrics-certs\") pod \"controller-6968d8fdc4-dg6dc\" (UID: \"b8c2d8f0-d9c2-4b9e-802f-ec6a34533357\") " pod="metallb-system/controller-6968d8fdc4-dg6dc" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.672646 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b8c2d8f0-d9c2-4b9e-802f-ec6a34533357-cert\") pod \"controller-6968d8fdc4-dg6dc\" (UID: \"b8c2d8f0-d9c2-4b9e-802f-ec6a34533357\") " pod="metallb-system/controller-6968d8fdc4-dg6dc" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.672674 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5bb6335f-34a1-4d93-b71c-74b5f62ea699-frr-startup\") pod \"frr-k8s-9l7gp\" (UID: \"5bb6335f-34a1-4d93-b71c-74b5f62ea699\") " pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.672691 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1fcd8e0-6081-4cd1-9998-82a695f11d62-metrics-certs\") pod \"speaker-gtf4x\" (UID: \"b1fcd8e0-6081-4cd1-9998-82a695f11d62\") " pod="metallb-system/speaker-gtf4x" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.672712 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5bb6335f-34a1-4d93-b71c-74b5f62ea699-metrics-certs\") pod \"frr-k8s-9l7gp\" (UID: \"5bb6335f-34a1-4d93-b71c-74b5f62ea699\") " pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.672731 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5bb6335f-34a1-4d93-b71c-74b5f62ea699-reloader\") pod \"frr-k8s-9l7gp\" (UID: \"5bb6335f-34a1-4d93-b71c-74b5f62ea699\") " pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.672753 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jsmb\" (UniqueName: \"kubernetes.io/projected/e8f0fec3-947b-4599-8fc7-e588a982471e-kube-api-access-8jsmb\") pod \"frr-k8s-webhook-server-7df86c4f6c-k2kpp\" (UID: \"e8f0fec3-947b-4599-8fc7-e588a982471e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k2kpp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.672770 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kfhf\" (UniqueName: \"kubernetes.io/projected/5bb6335f-34a1-4d93-b71c-74b5f62ea699-kube-api-access-9kfhf\") pod \"frr-k8s-9l7gp\" (UID: \"5bb6335f-34a1-4d93-b71c-74b5f62ea699\") " pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.672792 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b1fcd8e0-6081-4cd1-9998-82a695f11d62-memberlist\") pod \"speaker-gtf4x\" (UID: \"b1fcd8e0-6081-4cd1-9998-82a695f11d62\") " pod="metallb-system/speaker-gtf4x" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.672805 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zf4h\" (UniqueName: \"kubernetes.io/projected/b8c2d8f0-d9c2-4b9e-802f-ec6a34533357-kube-api-access-5zf4h\") pod \"controller-6968d8fdc4-dg6dc\" (UID: \"b8c2d8f0-d9c2-4b9e-802f-ec6a34533357\") " pod="metallb-system/controller-6968d8fdc4-dg6dc" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.672829 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b1fcd8e0-6081-4cd1-9998-82a695f11d62-metallb-excludel2\") pod \"speaker-gtf4x\" (UID: \"b1fcd8e0-6081-4cd1-9998-82a695f11d62\") " pod="metallb-system/speaker-gtf4x" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.672851 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5bb6335f-34a1-4d93-b71c-74b5f62ea699-metrics\") pod \"frr-k8s-9l7gp\" (UID: \"5bb6335f-34a1-4d93-b71c-74b5f62ea699\") " pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.672866 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5bb6335f-34a1-4d93-b71c-74b5f62ea699-frr-conf\") pod \"frr-k8s-9l7gp\" (UID: \"5bb6335f-34a1-4d93-b71c-74b5f62ea699\") " pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.672902 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5bb6335f-34a1-4d93-b71c-74b5f62ea699-frr-sockets\") pod \"frr-k8s-9l7gp\" (UID: \"5bb6335f-34a1-4d93-b71c-74b5f62ea699\") " pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.672925 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fspvm\" (UniqueName: \"kubernetes.io/projected/b1fcd8e0-6081-4cd1-9998-82a695f11d62-kube-api-access-fspvm\") pod \"speaker-gtf4x\" (UID: \"b1fcd8e0-6081-4cd1-9998-82a695f11d62\") " pod="metallb-system/speaker-gtf4x" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.672944 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e8f0fec3-947b-4599-8fc7-e588a982471e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-k2kpp\" (UID: \"e8f0fec3-947b-4599-8fc7-e588a982471e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k2kpp" Feb 01 14:33:58 crc kubenswrapper[4820]: E0201 14:33:58.673046 4820 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 01 14:33:58 crc kubenswrapper[4820]: E0201 14:33:58.673091 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e8f0fec3-947b-4599-8fc7-e588a982471e-cert podName:e8f0fec3-947b-4599-8fc7-e588a982471e nodeName:}" failed. No retries permitted until 2026-02-01 14:33:59.173075258 +0000 UTC m=+780.693441542 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e8f0fec3-947b-4599-8fc7-e588a982471e-cert") pod "frr-k8s-webhook-server-7df86c4f6c-k2kpp" (UID: "e8f0fec3-947b-4599-8fc7-e588a982471e") : secret "frr-k8s-webhook-server-cert" not found Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.674105 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5bb6335f-34a1-4d93-b71c-74b5f62ea699-frr-startup\") pod \"frr-k8s-9l7gp\" (UID: \"5bb6335f-34a1-4d93-b71c-74b5f62ea699\") " pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.675330 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5bb6335f-34a1-4d93-b71c-74b5f62ea699-reloader\") pod \"frr-k8s-9l7gp\" (UID: \"5bb6335f-34a1-4d93-b71c-74b5f62ea699\") " pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.675863 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5bb6335f-34a1-4d93-b71c-74b5f62ea699-metrics\") pod \"frr-k8s-9l7gp\" (UID: \"5bb6335f-34a1-4d93-b71c-74b5f62ea699\") " pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.676106 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5bb6335f-34a1-4d93-b71c-74b5f62ea699-frr-conf\") pod \"frr-k8s-9l7gp\" (UID: \"5bb6335f-34a1-4d93-b71c-74b5f62ea699\") " pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.676311 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5bb6335f-34a1-4d93-b71c-74b5f62ea699-frr-sockets\") pod \"frr-k8s-9l7gp\" (UID: \"5bb6335f-34a1-4d93-b71c-74b5f62ea699\") " pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.681467 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5bb6335f-34a1-4d93-b71c-74b5f62ea699-metrics-certs\") pod \"frr-k8s-9l7gp\" (UID: \"5bb6335f-34a1-4d93-b71c-74b5f62ea699\") " pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.710496 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kfhf\" (UniqueName: \"kubernetes.io/projected/5bb6335f-34a1-4d93-b71c-74b5f62ea699-kube-api-access-9kfhf\") pod \"frr-k8s-9l7gp\" (UID: \"5bb6335f-34a1-4d93-b71c-74b5f62ea699\") " pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.752832 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jsmb\" (UniqueName: \"kubernetes.io/projected/e8f0fec3-947b-4599-8fc7-e588a982471e-kube-api-access-8jsmb\") pod \"frr-k8s-webhook-server-7df86c4f6c-k2kpp\" (UID: \"e8f0fec3-947b-4599-8fc7-e588a982471e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k2kpp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.774018 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fspvm\" (UniqueName: \"kubernetes.io/projected/b1fcd8e0-6081-4cd1-9998-82a695f11d62-kube-api-access-fspvm\") pod \"speaker-gtf4x\" (UID: \"b1fcd8e0-6081-4cd1-9998-82a695f11d62\") " pod="metallb-system/speaker-gtf4x" Feb 01 14:33:58 crc 
kubenswrapper[4820]: I0201 14:33:58.774079 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8c2d8f0-d9c2-4b9e-802f-ec6a34533357-metrics-certs\") pod \"controller-6968d8fdc4-dg6dc\" (UID: \"b8c2d8f0-d9c2-4b9e-802f-ec6a34533357\") " pod="metallb-system/controller-6968d8fdc4-dg6dc" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.774105 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b8c2d8f0-d9c2-4b9e-802f-ec6a34533357-cert\") pod \"controller-6968d8fdc4-dg6dc\" (UID: \"b8c2d8f0-d9c2-4b9e-802f-ec6a34533357\") " pod="metallb-system/controller-6968d8fdc4-dg6dc" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.774120 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1fcd8e0-6081-4cd1-9998-82a695f11d62-metrics-certs\") pod \"speaker-gtf4x\" (UID: \"b1fcd8e0-6081-4cd1-9998-82a695f11d62\") " pod="metallb-system/speaker-gtf4x" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.774167 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b1fcd8e0-6081-4cd1-9998-82a695f11d62-memberlist\") pod \"speaker-gtf4x\" (UID: \"b1fcd8e0-6081-4cd1-9998-82a695f11d62\") " pod="metallb-system/speaker-gtf4x" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.774185 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zf4h\" (UniqueName: \"kubernetes.io/projected/b8c2d8f0-d9c2-4b9e-802f-ec6a34533357-kube-api-access-5zf4h\") pod \"controller-6968d8fdc4-dg6dc\" (UID: \"b8c2d8f0-d9c2-4b9e-802f-ec6a34533357\") " pod="metallb-system/controller-6968d8fdc4-dg6dc" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.774206 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b1fcd8e0-6081-4cd1-9998-82a695f11d62-metallb-excludel2\") pod \"speaker-gtf4x\" (UID: \"b1fcd8e0-6081-4cd1-9998-82a695f11d62\") " pod="metallb-system/speaker-gtf4x" Feb 01 14:33:58 crc kubenswrapper[4820]: E0201 14:33:58.774268 4820 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 01 14:33:58 crc kubenswrapper[4820]: E0201 14:33:58.774347 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1fcd8e0-6081-4cd1-9998-82a695f11d62-metrics-certs podName:b1fcd8e0-6081-4cd1-9998-82a695f11d62 nodeName:}" failed. No retries permitted until 2026-02-01 14:33:59.274326822 +0000 UTC m=+780.794693106 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1fcd8e0-6081-4cd1-9998-82a695f11d62-metrics-certs") pod "speaker-gtf4x" (UID: "b1fcd8e0-6081-4cd1-9998-82a695f11d62") : secret "speaker-certs-secret" not found Feb 01 14:33:58 crc kubenswrapper[4820]: E0201 14:33:58.774347 4820 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 01 14:33:58 crc kubenswrapper[4820]: E0201 14:33:58.774423 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1fcd8e0-6081-4cd1-9998-82a695f11d62-memberlist podName:b1fcd8e0-6081-4cd1-9998-82a695f11d62 nodeName:}" failed. 
No retries permitted until 2026-02-01 14:33:59.274404754 +0000 UTC m=+780.794771038 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b1fcd8e0-6081-4cd1-9998-82a695f11d62-memberlist") pod "speaker-gtf4x" (UID: "b1fcd8e0-6081-4cd1-9998-82a695f11d62") : secret "metallb-memberlist" not found Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.774839 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b1fcd8e0-6081-4cd1-9998-82a695f11d62-metallb-excludel2\") pod \"speaker-gtf4x\" (UID: \"b1fcd8e0-6081-4cd1-9998-82a695f11d62\") " pod="metallb-system/speaker-gtf4x" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.777064 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8c2d8f0-d9c2-4b9e-802f-ec6a34533357-metrics-certs\") pod \"controller-6968d8fdc4-dg6dc\" (UID: \"b8c2d8f0-d9c2-4b9e-802f-ec6a34533357\") " pod="metallb-system/controller-6968d8fdc4-dg6dc" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.785272 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b8c2d8f0-d9c2-4b9e-802f-ec6a34533357-cert\") pod \"controller-6968d8fdc4-dg6dc\" (UID: \"b8c2d8f0-d9c2-4b9e-802f-ec6a34533357\") " pod="metallb-system/controller-6968d8fdc4-dg6dc" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.790719 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zf4h\" (UniqueName: \"kubernetes.io/projected/b8c2d8f0-d9c2-4b9e-802f-ec6a34533357-kube-api-access-5zf4h\") pod \"controller-6968d8fdc4-dg6dc\" (UID: \"b8c2d8f0-d9c2-4b9e-802f-ec6a34533357\") " pod="metallb-system/controller-6968d8fdc4-dg6dc" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.794371 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fspvm\" (UniqueName: \"kubernetes.io/projected/b1fcd8e0-6081-4cd1-9998-82a695f11d62-kube-api-access-fspvm\") pod \"speaker-gtf4x\" (UID: \"b1fcd8e0-6081-4cd1-9998-82a695f11d62\") " pod="metallb-system/speaker-gtf4x" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.858803 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:33:58 crc kubenswrapper[4820]: I0201 14:33:58.957978 4820 util.go:30] "No sandbox for pod can be found. 
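
Note the contrast in the block above: the metallb-excludel2 ConfigMap volume mounts immediately (the object exists; its cache was populated at 14:33:58.622), while the secret-backed memberlist and metrics-certs volumes keep failing. For a secret volume, optional defaults to false, so a missing Secret blocks the mount, as here, rather than producing an empty directory. A minimal sketch of the two sources being reconciled, with the object names taken from the log:

package main

import corev1 "k8s.io/api/core/v1"

// speakerVolumes shows why one mount succeeds and the other loops: the
// ConfigMap exists, and the Secret is non-optional and not yet created.
func speakerVolumes() []corev1.Volume {
	optional := false // the default; a missing Secret keeps the mount pending
	return []corev1.Volume{
		{Name: "memberlist", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "metallb-memberlist",
				Optional:   &optional,
			},
		}},
		{Name: "metallb-excludel2", VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "metallb-excludel2"},
			},
		}},
	}
}
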
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-dg6dc" Feb 01 14:33:59 crc kubenswrapper[4820]: I0201 14:33:59.038564 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9l7gp" event={"ID":"5bb6335f-34a1-4d93-b71c-74b5f62ea699","Type":"ContainerStarted","Data":"2112152888c04d3727b669ed6868ebc04da77268d1870b08897239b3dc53f7d8"} Feb 01 14:33:59 crc kubenswrapper[4820]: I0201 14:33:59.179907 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e8f0fec3-947b-4599-8fc7-e588a982471e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-k2kpp\" (UID: \"e8f0fec3-947b-4599-8fc7-e588a982471e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k2kpp" Feb 01 14:33:59 crc kubenswrapper[4820]: I0201 14:33:59.185485 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e8f0fec3-947b-4599-8fc7-e588a982471e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-k2kpp\" (UID: \"e8f0fec3-947b-4599-8fc7-e588a982471e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k2kpp" Feb 01 14:33:59 crc kubenswrapper[4820]: I0201 14:33:59.281212 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1fcd8e0-6081-4cd1-9998-82a695f11d62-metrics-certs\") pod \"speaker-gtf4x\" (UID: \"b1fcd8e0-6081-4cd1-9998-82a695f11d62\") " pod="metallb-system/speaker-gtf4x" Feb 01 14:33:59 crc kubenswrapper[4820]: I0201 14:33:59.281288 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b1fcd8e0-6081-4cd1-9998-82a695f11d62-memberlist\") pod \"speaker-gtf4x\" (UID: \"b1fcd8e0-6081-4cd1-9998-82a695f11d62\") " pod="metallb-system/speaker-gtf4x" Feb 01 14:33:59 crc kubenswrapper[4820]: E0201 14:33:59.281385 4820 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 01 14:33:59 crc kubenswrapper[4820]: E0201 14:33:59.281479 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1fcd8e0-6081-4cd1-9998-82a695f11d62-memberlist podName:b1fcd8e0-6081-4cd1-9998-82a695f11d62 nodeName:}" failed. No retries permitted until 2026-02-01 14:34:00.281462795 +0000 UTC m=+781.801829079 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b1fcd8e0-6081-4cd1-9998-82a695f11d62-memberlist") pod "speaker-gtf4x" (UID: "b1fcd8e0-6081-4cd1-9998-82a695f11d62") : secret "metallb-memberlist" not found Feb 01 14:33:59 crc kubenswrapper[4820]: I0201 14:33:59.285308 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1fcd8e0-6081-4cd1-9998-82a695f11d62-metrics-certs\") pod \"speaker-gtf4x\" (UID: \"b1fcd8e0-6081-4cd1-9998-82a695f11d62\") " pod="metallb-system/speaker-gtf4x" Feb 01 14:33:59 crc kubenswrapper[4820]: I0201 14:33:59.406171 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-dg6dc"] Feb 01 14:33:59 crc kubenswrapper[4820]: W0201 14:33:59.409202 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8c2d8f0_d9c2_4b9e_802f_ec6a34533357.slice/crio-52a3a52667a53e24694322556b4b85e735a6d2eb6717c302969cd2f73c6ca90b WatchSource:0}: Error finding container 52a3a52667a53e24694322556b4b85e735a6d2eb6717c302969cd2f73c6ca90b: Status 404 returned error can't find the container with id 52a3a52667a53e24694322556b4b85e735a6d2eb6717c302969cd2f73c6ca90b Feb 01 14:33:59 crc kubenswrapper[4820]: I0201 14:33:59.466797 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k2kpp" Feb 01 14:33:59 crc kubenswrapper[4820]: I0201 14:33:59.688285 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-k2kpp"] Feb 01 14:33:59 crc kubenswrapper[4820]: W0201 14:33:59.699644 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8f0fec3_947b_4599_8fc7_e588a982471e.slice/crio-9d509cbd513727d38d0caa25625601dcafe871984d2ffa3b3c5077c57543223d WatchSource:0}: Error finding container 9d509cbd513727d38d0caa25625601dcafe871984d2ffa3b3c5077c57543223d: Status 404 returned error can't find the container with id 9d509cbd513727d38d0caa25625601dcafe871984d2ffa3b3c5077c57543223d Feb 01 14:34:00 crc kubenswrapper[4820]: I0201 14:34:00.044150 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-dg6dc" event={"ID":"b8c2d8f0-d9c2-4b9e-802f-ec6a34533357","Type":"ContainerStarted","Data":"12916da15003a11598ef49d26584034bd98deed379519c450eec62a583e34c0f"} Feb 01 14:34:00 crc kubenswrapper[4820]: I0201 14:34:00.044215 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-dg6dc" event={"ID":"b8c2d8f0-d9c2-4b9e-802f-ec6a34533357","Type":"ContainerStarted","Data":"160f05df2ae07c9f51132fedf446d38f4d461bd0ab3fcd4a58c258ed1d23bc2f"} Feb 01 14:34:00 crc kubenswrapper[4820]: I0201 14:34:00.044243 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-dg6dc" event={"ID":"b8c2d8f0-d9c2-4b9e-802f-ec6a34533357","Type":"ContainerStarted","Data":"52a3a52667a53e24694322556b4b85e735a6d2eb6717c302969cd2f73c6ca90b"} Feb 01 14:34:00 crc kubenswrapper[4820]: I0201 14:34:00.044984 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-dg6dc" Feb 01 14:34:00 crc kubenswrapper[4820]: I0201 14:34:00.045962 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k2kpp" 
event={"ID":"e8f0fec3-947b-4599-8fc7-e588a982471e","Type":"ContainerStarted","Data":"9d509cbd513727d38d0caa25625601dcafe871984d2ffa3b3c5077c57543223d"} Feb 01 14:34:00 crc kubenswrapper[4820]: I0201 14:34:00.061474 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-dg6dc" podStartSLOduration=2.061456955 podStartE2EDuration="2.061456955s" podCreationTimestamp="2026-02-01 14:33:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:34:00.059092148 +0000 UTC m=+781.579458432" watchObservedRunningTime="2026-02-01 14:34:00.061456955 +0000 UTC m=+781.581823239" Feb 01 14:34:00 crc kubenswrapper[4820]: I0201 14:34:00.293127 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b1fcd8e0-6081-4cd1-9998-82a695f11d62-memberlist\") pod \"speaker-gtf4x\" (UID: \"b1fcd8e0-6081-4cd1-9998-82a695f11d62\") " pod="metallb-system/speaker-gtf4x" Feb 01 14:34:00 crc kubenswrapper[4820]: I0201 14:34:00.299645 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b1fcd8e0-6081-4cd1-9998-82a695f11d62-memberlist\") pod \"speaker-gtf4x\" (UID: \"b1fcd8e0-6081-4cd1-9998-82a695f11d62\") " pod="metallb-system/speaker-gtf4x" Feb 01 14:34:00 crc kubenswrapper[4820]: I0201 14:34:00.427136 4820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-6kdxv" Feb 01 14:34:00 crc kubenswrapper[4820]: I0201 14:34:00.436189 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-gtf4x" Feb 01 14:34:00 crc kubenswrapper[4820]: W0201 14:34:00.454906 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1fcd8e0_6081_4cd1_9998_82a695f11d62.slice/crio-af46e36c7d3987b5bbbc70a10a90a349207e2f5ceee106a1f7a5f4531acf224d WatchSource:0}: Error finding container af46e36c7d3987b5bbbc70a10a90a349207e2f5ceee106a1f7a5f4531acf224d: Status 404 returned error can't find the container with id af46e36c7d3987b5bbbc70a10a90a349207e2f5ceee106a1f7a5f4531acf224d Feb 01 14:34:01 crc kubenswrapper[4820]: I0201 14:34:01.054488 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gtf4x" event={"ID":"b1fcd8e0-6081-4cd1-9998-82a695f11d62","Type":"ContainerStarted","Data":"8840c5be4884ca78e7da6464e61ce410b51fe2fe4beee17c369776d5733423ac"} Feb 01 14:34:01 crc kubenswrapper[4820]: I0201 14:34:01.054836 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gtf4x" event={"ID":"b1fcd8e0-6081-4cd1-9998-82a695f11d62","Type":"ContainerStarted","Data":"08429fbbb41a53e61fa0dbfa02255078658635d25b412c46a93a9ca26d4c774f"} Feb 01 14:34:01 crc kubenswrapper[4820]: I0201 14:34:01.054845 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gtf4x" event={"ID":"b1fcd8e0-6081-4cd1-9998-82a695f11d62","Type":"ContainerStarted","Data":"af46e36c7d3987b5bbbc70a10a90a349207e2f5ceee106a1f7a5f4531acf224d"} Feb 01 14:34:01 crc kubenswrapper[4820]: I0201 14:34:01.055347 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-gtf4x" Feb 01 14:34:01 crc kubenswrapper[4820]: I0201 14:34:01.082247 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/speaker-gtf4x" podStartSLOduration=3.082233048 podStartE2EDuration="3.082233048s" podCreationTimestamp="2026-02-01 14:33:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:34:01.080606169 +0000 UTC m=+782.600972453" watchObservedRunningTime="2026-02-01 14:34:01.082233048 +0000 UTC m=+782.602599322" Feb 01 14:34:06 crc kubenswrapper[4820]: I0201 14:34:06.091645 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k2kpp" event={"ID":"e8f0fec3-947b-4599-8fc7-e588a982471e","Type":"ContainerStarted","Data":"a56dd9cc6296b5117380017de77bf4a870bebbf462ee1017f7635b417d0ee48e"} Feb 01 14:34:06 crc kubenswrapper[4820]: I0201 14:34:06.092249 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k2kpp" Feb 01 14:34:06 crc kubenswrapper[4820]: I0201 14:34:06.095094 4820 generic.go:334] "Generic (PLEG): container finished" podID="5bb6335f-34a1-4d93-b71c-74b5f62ea699" containerID="42d5b122ed82f99d22d9a852e6d12b4cfeb0d4514c2c22432a38a0a64b6c3496" exitCode=0 Feb 01 14:34:06 crc kubenswrapper[4820]: I0201 14:34:06.095122 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9l7gp" event={"ID":"5bb6335f-34a1-4d93-b71c-74b5f62ea699","Type":"ContainerDied","Data":"42d5b122ed82f99d22d9a852e6d12b4cfeb0d4514c2c22432a38a0a64b6c3496"} Feb 01 14:34:06 crc kubenswrapper[4820]: I0201 14:34:06.111564 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k2kpp" podStartSLOduration=1.944165894 podStartE2EDuration="8.111547602s" podCreationTimestamp="2026-02-01 14:33:58 +0000 UTC" firstStartedPulling="2026-02-01 14:33:59.701950526 +0000 UTC m=+781.222316810" lastFinishedPulling="2026-02-01 14:34:05.869332234 +0000 UTC m=+787.389698518" observedRunningTime="2026-02-01 14:34:06.110132208 +0000 UTC m=+787.630498582" watchObservedRunningTime="2026-02-01 14:34:06.111547602 +0000 UTC m=+787.631913906" Feb 01 14:34:07 crc kubenswrapper[4820]: I0201 14:34:07.102918 4820 generic.go:334] "Generic (PLEG): container finished" podID="5bb6335f-34a1-4d93-b71c-74b5f62ea699" containerID="bd80b07d09c4b2f0374b595e597913f15d525041918a600f2a0a336c4e91fa33" exitCode=0 Feb 01 14:34:07 crc kubenswrapper[4820]: I0201 14:34:07.102981 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9l7gp" event={"ID":"5bb6335f-34a1-4d93-b71c-74b5f62ea699","Type":"ContainerDied","Data":"bd80b07d09c4b2f0374b595e597913f15d525041918a600f2a0a336c4e91fa33"} Feb 01 14:34:08 crc kubenswrapper[4820]: I0201 14:34:08.112438 4820 generic.go:334] "Generic (PLEG): container finished" podID="5bb6335f-34a1-4d93-b71c-74b5f62ea699" containerID="94384e19a2ececdcaaaf600145f2428802d642b5f0113265b3888f3abec243b4" exitCode=0 Feb 01 14:34:08 crc kubenswrapper[4820]: I0201 14:34:08.112516 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9l7gp" event={"ID":"5bb6335f-34a1-4d93-b71c-74b5f62ea699","Type":"ContainerDied","Data":"94384e19a2ececdcaaaf600145f2428802d642b5f0113265b3888f3abec243b4"} Feb 01 14:34:09 crc kubenswrapper[4820]: I0201 14:34:09.123540 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9l7gp" 
event={"ID":"5bb6335f-34a1-4d93-b71c-74b5f62ea699","Type":"ContainerStarted","Data":"736f69931ea2d83e95253f94cbc0d9944bec39febb64e21158cd12dcfbf21678"} Feb 01 14:34:09 crc kubenswrapper[4820]: I0201 14:34:09.123892 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9l7gp" event={"ID":"5bb6335f-34a1-4d93-b71c-74b5f62ea699","Type":"ContainerStarted","Data":"54a97fdd86fdd6cde186e4898d77eb4b523c1d020f4f21fd37a815663789d8d9"} Feb 01 14:34:09 crc kubenswrapper[4820]: I0201 14:34:09.123906 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9l7gp" event={"ID":"5bb6335f-34a1-4d93-b71c-74b5f62ea699","Type":"ContainerStarted","Data":"d5e5d7251e0608aea700762d7b6740d520e916a350783c3e8cd5e76fd3e08743"} Feb 01 14:34:09 crc kubenswrapper[4820]: I0201 14:34:09.123922 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:34:09 crc kubenswrapper[4820]: I0201 14:34:09.123935 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9l7gp" event={"ID":"5bb6335f-34a1-4d93-b71c-74b5f62ea699","Type":"ContainerStarted","Data":"e4985537e3a719e15aec312cb343b49cb405cd8ff938cd6fb701eba4913059fe"} Feb 01 14:34:09 crc kubenswrapper[4820]: I0201 14:34:09.123944 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9l7gp" event={"ID":"5bb6335f-34a1-4d93-b71c-74b5f62ea699","Type":"ContainerStarted","Data":"c9a5d4d94243e9c869ec2fd59a6929876565e26dd2139ddd979f95391be72277"} Feb 01 14:34:09 crc kubenswrapper[4820]: I0201 14:34:09.141909 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-9l7gp" podStartSLOduration=4.222531066 podStartE2EDuration="11.141890918s" podCreationTimestamp="2026-02-01 14:33:58 +0000 UTC" firstStartedPulling="2026-02-01 14:33:58.971679297 +0000 UTC m=+780.492045571" lastFinishedPulling="2026-02-01 14:34:05.891039139 +0000 UTC m=+787.411405423" observedRunningTime="2026-02-01 14:34:09.140647869 +0000 UTC m=+790.661014153" watchObservedRunningTime="2026-02-01 14:34:09.141890918 +0000 UTC m=+790.662257202" Feb 01 14:34:10 crc kubenswrapper[4820]: I0201 14:34:10.133051 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9l7gp" event={"ID":"5bb6335f-34a1-4d93-b71c-74b5f62ea699","Type":"ContainerStarted","Data":"48d9a06533e27996de8ceec1d4efe1ca3e088d7d12e098ce792dca6a5e3218f1"} Feb 01 14:34:10 crc kubenswrapper[4820]: I0201 14:34:10.439253 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-gtf4x" Feb 01 14:34:14 crc kubenswrapper[4820]: I0201 14:34:14.211411 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:34:14 crc kubenswrapper[4820]: I0201 14:34:14.220213 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-s97j7"] Feb 01 14:34:14 crc kubenswrapper[4820]: I0201 14:34:14.220972 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-s97j7" Feb 01 14:34:14 crc kubenswrapper[4820]: I0201 14:34:14.227343 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-s97j7"] Feb 01 14:34:14 crc kubenswrapper[4820]: I0201 14:34:14.246238 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 01 14:34:14 crc kubenswrapper[4820]: I0201 14:34:14.246549 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 01 14:34:14 crc kubenswrapper[4820]: I0201 14:34:14.247029 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-pwqg2" Feb 01 14:34:14 crc kubenswrapper[4820]: I0201 14:34:14.268542 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:34:14 crc kubenswrapper[4820]: I0201 14:34:14.412825 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sblcp\" (UniqueName: \"kubernetes.io/projected/91365da6-7a3b-439a-b68e-50b08da308bc-kube-api-access-sblcp\") pod \"openstack-operator-index-s97j7\" (UID: \"91365da6-7a3b-439a-b68e-50b08da308bc\") " pod="openstack-operators/openstack-operator-index-s97j7" Feb 01 14:34:14 crc kubenswrapper[4820]: I0201 14:34:14.513819 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sblcp\" (UniqueName: \"kubernetes.io/projected/91365da6-7a3b-439a-b68e-50b08da308bc-kube-api-access-sblcp\") pod \"openstack-operator-index-s97j7\" (UID: \"91365da6-7a3b-439a-b68e-50b08da308bc\") " pod="openstack-operators/openstack-operator-index-s97j7" Feb 01 14:34:14 crc kubenswrapper[4820]: I0201 14:34:14.531630 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sblcp\" (UniqueName: \"kubernetes.io/projected/91365da6-7a3b-439a-b68e-50b08da308bc-kube-api-access-sblcp\") pod \"openstack-operator-index-s97j7\" (UID: \"91365da6-7a3b-439a-b68e-50b08da308bc\") " pod="openstack-operators/openstack-operator-index-s97j7" Feb 01 14:34:14 crc kubenswrapper[4820]: I0201 14:34:14.582050 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-s97j7" Feb 01 14:34:15 crc kubenswrapper[4820]: I0201 14:34:15.014278 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-s97j7"] Feb 01 14:34:15 crc kubenswrapper[4820]: W0201 14:34:15.031467 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91365da6_7a3b_439a_b68e_50b08da308bc.slice/crio-2077322fc0858925639b39e6abbeb99ff7e9b0e76ca6a02b147399c052eb2ae9 WatchSource:0}: Error finding container 2077322fc0858925639b39e6abbeb99ff7e9b0e76ca6a02b147399c052eb2ae9: Status 404 returned error can't find the container with id 2077322fc0858925639b39e6abbeb99ff7e9b0e76ca6a02b147399c052eb2ae9 Feb 01 14:34:15 crc kubenswrapper[4820]: I0201 14:34:15.236059 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-s97j7" event={"ID":"91365da6-7a3b-439a-b68e-50b08da308bc","Type":"ContainerStarted","Data":"2077322fc0858925639b39e6abbeb99ff7e9b0e76ca6a02b147399c052eb2ae9"} Feb 01 14:34:16 crc kubenswrapper[4820]: I0201 14:34:16.700224 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-s97j7"] Feb 01 14:34:17 crc kubenswrapper[4820]: I0201 14:34:17.259465 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-s97j7" event={"ID":"91365da6-7a3b-439a-b68e-50b08da308bc","Type":"ContainerStarted","Data":"f2df1dab4d8fe14c2fd341baa61af3110126ca681868cd54d95e4c4283a0a6cb"} Feb 01 14:34:17 crc kubenswrapper[4820]: I0201 14:34:17.259659 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-s97j7" podUID="91365da6-7a3b-439a-b68e-50b08da308bc" containerName="registry-server" containerID="cri-o://f2df1dab4d8fe14c2fd341baa61af3110126ca681868cd54d95e4c4283a0a6cb" gracePeriod=2 Feb 01 14:34:17 crc kubenswrapper[4820]: I0201 14:34:17.282785 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-s97j7" podStartSLOduration=2.370910475 podStartE2EDuration="4.282767549s" podCreationTimestamp="2026-02-01 14:34:13 +0000 UTC" firstStartedPulling="2026-02-01 14:34:15.035266832 +0000 UTC m=+796.555633156" lastFinishedPulling="2026-02-01 14:34:16.947123946 +0000 UTC m=+798.467490230" observedRunningTime="2026-02-01 14:34:17.276292162 +0000 UTC m=+798.796658456" watchObservedRunningTime="2026-02-01 14:34:17.282767549 +0000 UTC m=+798.803133833" Feb 01 14:34:17 crc kubenswrapper[4820]: I0201 14:34:17.313367 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-kd6lp"] Feb 01 14:34:17 crc kubenswrapper[4820]: I0201 14:34:17.314325 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kd6lp" Feb 01 14:34:17 crc kubenswrapper[4820]: I0201 14:34:17.319738 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kd6lp"] Feb 01 14:34:17 crc kubenswrapper[4820]: I0201 14:34:17.450900 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts4j9\" (UniqueName: \"kubernetes.io/projected/500ca2ff-1d42-4876-9838-3e11b923e72e-kube-api-access-ts4j9\") pod \"openstack-operator-index-kd6lp\" (UID: \"500ca2ff-1d42-4876-9838-3e11b923e72e\") " pod="openstack-operators/openstack-operator-index-kd6lp" Feb 01 14:34:17 crc kubenswrapper[4820]: I0201 14:34:17.552128 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts4j9\" (UniqueName: \"kubernetes.io/projected/500ca2ff-1d42-4876-9838-3e11b923e72e-kube-api-access-ts4j9\") pod \"openstack-operator-index-kd6lp\" (UID: \"500ca2ff-1d42-4876-9838-3e11b923e72e\") " pod="openstack-operators/openstack-operator-index-kd6lp" Feb 01 14:34:17 crc kubenswrapper[4820]: I0201 14:34:17.570703 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts4j9\" (UniqueName: \"kubernetes.io/projected/500ca2ff-1d42-4876-9838-3e11b923e72e-kube-api-access-ts4j9\") pod \"openstack-operator-index-kd6lp\" (UID: \"500ca2ff-1d42-4876-9838-3e11b923e72e\") " pod="openstack-operators/openstack-operator-index-kd6lp" Feb 01 14:34:17 crc kubenswrapper[4820]: I0201 14:34:17.615405 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-s97j7" Feb 01 14:34:17 crc kubenswrapper[4820]: I0201 14:34:17.693137 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-kd6lp" Feb 01 14:34:17 crc kubenswrapper[4820]: I0201 14:34:17.754461 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sblcp\" (UniqueName: \"kubernetes.io/projected/91365da6-7a3b-439a-b68e-50b08da308bc-kube-api-access-sblcp\") pod \"91365da6-7a3b-439a-b68e-50b08da308bc\" (UID: \"91365da6-7a3b-439a-b68e-50b08da308bc\") " Feb 01 14:34:17 crc kubenswrapper[4820]: I0201 14:34:17.760737 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91365da6-7a3b-439a-b68e-50b08da308bc-kube-api-access-sblcp" (OuterVolumeSpecName: "kube-api-access-sblcp") pod "91365da6-7a3b-439a-b68e-50b08da308bc" (UID: "91365da6-7a3b-439a-b68e-50b08da308bc"). InnerVolumeSpecName "kube-api-access-sblcp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:34:17 crc kubenswrapper[4820]: I0201 14:34:17.856139 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sblcp\" (UniqueName: \"kubernetes.io/projected/91365da6-7a3b-439a-b68e-50b08da308bc-kube-api-access-sblcp\") on node \"crc\" DevicePath \"\"" Feb 01 14:34:17 crc kubenswrapper[4820]: I0201 14:34:17.877701 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kd6lp"] Feb 01 14:34:17 crc kubenswrapper[4820]: W0201 14:34:17.884059 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod500ca2ff_1d42_4876_9838_3e11b923e72e.slice/crio-bae43e19406892e0c3e6f6f9cab298bc0df179d730b16922c31b3d2ac8d63f83 WatchSource:0}: Error finding container bae43e19406892e0c3e6f6f9cab298bc0df179d730b16922c31b3d2ac8d63f83: Status 404 returned error can't find the container with id bae43e19406892e0c3e6f6f9cab298bc0df179d730b16922c31b3d2ac8d63f83 Feb 01 14:34:18 crc kubenswrapper[4820]: I0201 14:34:18.267172 4820 generic.go:334] "Generic (PLEG): container finished" podID="91365da6-7a3b-439a-b68e-50b08da308bc" containerID="f2df1dab4d8fe14c2fd341baa61af3110126ca681868cd54d95e4c4283a0a6cb" exitCode=0 Feb 01 14:34:18 crc kubenswrapper[4820]: I0201 14:34:18.267484 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-s97j7" event={"ID":"91365da6-7a3b-439a-b68e-50b08da308bc","Type":"ContainerDied","Data":"f2df1dab4d8fe14c2fd341baa61af3110126ca681868cd54d95e4c4283a0a6cb"} Feb 01 14:34:18 crc kubenswrapper[4820]: I0201 14:34:18.267512 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-s97j7" event={"ID":"91365da6-7a3b-439a-b68e-50b08da308bc","Type":"ContainerDied","Data":"2077322fc0858925639b39e6abbeb99ff7e9b0e76ca6a02b147399c052eb2ae9"} Feb 01 14:34:18 crc kubenswrapper[4820]: I0201 14:34:18.267530 4820 scope.go:117] "RemoveContainer" containerID="f2df1dab4d8fe14c2fd341baa61af3110126ca681868cd54d95e4c4283a0a6cb" Feb 01 14:34:18 crc kubenswrapper[4820]: I0201 14:34:18.267621 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-s97j7" Feb 01 14:34:18 crc kubenswrapper[4820]: I0201 14:34:18.269785 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kd6lp" event={"ID":"500ca2ff-1d42-4876-9838-3e11b923e72e","Type":"ContainerStarted","Data":"4acd09aeb41c5055e3e0830699479a0110e6ca77fc9cc101b9d036ea3755123d"} Feb 01 14:34:18 crc kubenswrapper[4820]: I0201 14:34:18.270560 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kd6lp" event={"ID":"500ca2ff-1d42-4876-9838-3e11b923e72e","Type":"ContainerStarted","Data":"bae43e19406892e0c3e6f6f9cab298bc0df179d730b16922c31b3d2ac8d63f83"} Feb 01 14:34:18 crc kubenswrapper[4820]: I0201 14:34:18.291325 4820 scope.go:117] "RemoveContainer" containerID="f2df1dab4d8fe14c2fd341baa61af3110126ca681868cd54d95e4c4283a0a6cb" Feb 01 14:34:18 crc kubenswrapper[4820]: I0201 14:34:18.293172 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-kd6lp" podStartSLOduration=1.245643413 podStartE2EDuration="1.29313784s" podCreationTimestamp="2026-02-01 14:34:17 +0000 UTC" firstStartedPulling="2026-02-01 14:34:17.889965067 +0000 UTC m=+799.410331371" lastFinishedPulling="2026-02-01 14:34:17.937459504 +0000 UTC m=+799.457825798" observedRunningTime="2026-02-01 14:34:18.283529539 +0000 UTC m=+799.803895813" watchObservedRunningTime="2026-02-01 14:34:18.29313784 +0000 UTC m=+799.813504154" Feb 01 14:34:18 crc kubenswrapper[4820]: E0201 14:34:18.293760 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2df1dab4d8fe14c2fd341baa61af3110126ca681868cd54d95e4c4283a0a6cb\": container with ID starting with f2df1dab4d8fe14c2fd341baa61af3110126ca681868cd54d95e4c4283a0a6cb not found: ID does not exist" containerID="f2df1dab4d8fe14c2fd341baa61af3110126ca681868cd54d95e4c4283a0a6cb" Feb 01 14:34:18 crc kubenswrapper[4820]: I0201 14:34:18.293809 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2df1dab4d8fe14c2fd341baa61af3110126ca681868cd54d95e4c4283a0a6cb"} err="failed to get container status \"f2df1dab4d8fe14c2fd341baa61af3110126ca681868cd54d95e4c4283a0a6cb\": rpc error: code = NotFound desc = could not find container \"f2df1dab4d8fe14c2fd341baa61af3110126ca681868cd54d95e4c4283a0a6cb\": container with ID starting with f2df1dab4d8fe14c2fd341baa61af3110126ca681868cd54d95e4c4283a0a6cb not found: ID does not exist" Feb 01 14:34:18 crc kubenswrapper[4820]: I0201 14:34:18.303324 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-s97j7"] Feb 01 14:34:18 crc kubenswrapper[4820]: I0201 14:34:18.311596 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-s97j7"] Feb 01 14:34:18 crc kubenswrapper[4820]: I0201 14:34:18.862062 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-9l7gp" Feb 01 14:34:18 crc kubenswrapper[4820]: I0201 14:34:18.962759 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-dg6dc" Feb 01 14:34:19 crc kubenswrapper[4820]: I0201 14:34:19.224153 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91365da6-7a3b-439a-b68e-50b08da308bc" 
path="/var/lib/kubelet/pods/91365da6-7a3b-439a-b68e-50b08da308bc/volumes" Feb 01 14:34:19 crc kubenswrapper[4820]: I0201 14:34:19.242716 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:34:19 crc kubenswrapper[4820]: I0201 14:34:19.242796 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:34:19 crc kubenswrapper[4820]: I0201 14:34:19.242861 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 14:34:19 crc kubenswrapper[4820]: I0201 14:34:19.243794 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f42c80990978c64d79b08815b144c253d004fd2bc9ddecf8ee4b80f9fef30c27"} pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 14:34:19 crc kubenswrapper[4820]: I0201 14:34:19.243928 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" containerID="cri-o://f42c80990978c64d79b08815b144c253d004fd2bc9ddecf8ee4b80f9fef30c27" gracePeriod=600 Feb 01 14:34:19 crc kubenswrapper[4820]: I0201 14:34:19.472316 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k2kpp" Feb 01 14:34:20 crc kubenswrapper[4820]: I0201 14:34:20.286989 4820 generic.go:334] "Generic (PLEG): container finished" podID="060a9e0b-803f-4ccc-bed6-92614d449527" containerID="f42c80990978c64d79b08815b144c253d004fd2bc9ddecf8ee4b80f9fef30c27" exitCode=0 Feb 01 14:34:20 crc kubenswrapper[4820]: I0201 14:34:20.287065 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerDied","Data":"f42c80990978c64d79b08815b144c253d004fd2bc9ddecf8ee4b80f9fef30c27"} Feb 01 14:34:20 crc kubenswrapper[4820]: I0201 14:34:20.287369 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerStarted","Data":"3b1868d25809e0a94d683b5755d093ee9d0de6decf94341a6eb437233a52b5e1"} Feb 01 14:34:20 crc kubenswrapper[4820]: I0201 14:34:20.287391 4820 scope.go:117] "RemoveContainer" containerID="467d256019f163fae0b6c21f79014e504c0f178df2ba0ae24f36681ac92b6dfe" Feb 01 14:34:27 crc kubenswrapper[4820]: I0201 14:34:27.693256 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-kd6lp" Feb 01 14:34:27 crc kubenswrapper[4820]: I0201 14:34:27.693848 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-kd6lp" Feb 01 
14:34:27 crc kubenswrapper[4820]: I0201 14:34:27.719031 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-kd6lp" Feb 01 14:34:28 crc kubenswrapper[4820]: I0201 14:34:28.363075 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-kd6lp" Feb 01 14:34:29 crc kubenswrapper[4820]: I0201 14:34:29.598602 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65"] Feb 01 14:34:29 crc kubenswrapper[4820]: E0201 14:34:29.598978 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91365da6-7a3b-439a-b68e-50b08da308bc" containerName="registry-server" Feb 01 14:34:29 crc kubenswrapper[4820]: I0201 14:34:29.598995 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="91365da6-7a3b-439a-b68e-50b08da308bc" containerName="registry-server" Feb 01 14:34:29 crc kubenswrapper[4820]: I0201 14:34:29.599097 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="91365da6-7a3b-439a-b68e-50b08da308bc" containerName="registry-server" Feb 01 14:34:29 crc kubenswrapper[4820]: I0201 14:34:29.599942 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65" Feb 01 14:34:29 crc kubenswrapper[4820]: I0201 14:34:29.605887 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65"] Feb 01 14:34:29 crc kubenswrapper[4820]: I0201 14:34:29.609182 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-n6m5j" Feb 01 14:34:29 crc kubenswrapper[4820]: I0201 14:34:29.715933 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59b8afbb-2970-4526-be1a-4a52966f2387-util\") pod \"5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65\" (UID: \"59b8afbb-2970-4526-be1a-4a52966f2387\") " pod="openstack-operators/5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65" Feb 01 14:34:29 crc kubenswrapper[4820]: I0201 14:34:29.716007 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59b8afbb-2970-4526-be1a-4a52966f2387-bundle\") pod \"5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65\" (UID: \"59b8afbb-2970-4526-be1a-4a52966f2387\") " pod="openstack-operators/5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65" Feb 01 14:34:29 crc kubenswrapper[4820]: I0201 14:34:29.716149 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz6dd\" (UniqueName: \"kubernetes.io/projected/59b8afbb-2970-4526-be1a-4a52966f2387-kube-api-access-hz6dd\") pod \"5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65\" (UID: \"59b8afbb-2970-4526-be1a-4a52966f2387\") " pod="openstack-operators/5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65" Feb 01 14:34:29 crc kubenswrapper[4820]: I0201 14:34:29.817472 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59b8afbb-2970-4526-be1a-4a52966f2387-util\") pod \"5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65\" 
(UID: \"59b8afbb-2970-4526-be1a-4a52966f2387\") " pod="openstack-operators/5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65" Feb 01 14:34:29 crc kubenswrapper[4820]: I0201 14:34:29.817515 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59b8afbb-2970-4526-be1a-4a52966f2387-bundle\") pod \"5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65\" (UID: \"59b8afbb-2970-4526-be1a-4a52966f2387\") " pod="openstack-operators/5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65" Feb 01 14:34:29 crc kubenswrapper[4820]: I0201 14:34:29.817581 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz6dd\" (UniqueName: \"kubernetes.io/projected/59b8afbb-2970-4526-be1a-4a52966f2387-kube-api-access-hz6dd\") pod \"5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65\" (UID: \"59b8afbb-2970-4526-be1a-4a52966f2387\") " pod="openstack-operators/5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65" Feb 01 14:34:29 crc kubenswrapper[4820]: I0201 14:34:29.818341 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59b8afbb-2970-4526-be1a-4a52966f2387-util\") pod \"5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65\" (UID: \"59b8afbb-2970-4526-be1a-4a52966f2387\") " pod="openstack-operators/5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65" Feb 01 14:34:29 crc kubenswrapper[4820]: I0201 14:34:29.818532 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59b8afbb-2970-4526-be1a-4a52966f2387-bundle\") pod \"5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65\" (UID: \"59b8afbb-2970-4526-be1a-4a52966f2387\") " pod="openstack-operators/5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65" Feb 01 14:34:29 crc kubenswrapper[4820]: I0201 14:34:29.842304 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz6dd\" (UniqueName: \"kubernetes.io/projected/59b8afbb-2970-4526-be1a-4a52966f2387-kube-api-access-hz6dd\") pod \"5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65\" (UID: \"59b8afbb-2970-4526-be1a-4a52966f2387\") " pod="openstack-operators/5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65" Feb 01 14:34:29 crc kubenswrapper[4820]: I0201 14:34:29.925058 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65" Feb 01 14:34:30 crc kubenswrapper[4820]: I0201 14:34:30.129003 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65"] Feb 01 14:34:30 crc kubenswrapper[4820]: W0201 14:34:30.137139 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59b8afbb_2970_4526_be1a_4a52966f2387.slice/crio-c3826fc78d0200677f6b4d1f7b8323738e0050b95f72f4702babae46699adaf1 WatchSource:0}: Error finding container c3826fc78d0200677f6b4d1f7b8323738e0050b95f72f4702babae46699adaf1: Status 404 returned error can't find the container with id c3826fc78d0200677f6b4d1f7b8323738e0050b95f72f4702babae46699adaf1 Feb 01 14:34:30 crc kubenswrapper[4820]: I0201 14:34:30.353864 4820 generic.go:334] "Generic (PLEG): container finished" podID="59b8afbb-2970-4526-be1a-4a52966f2387" containerID="57bedd6032e670f97877c10ad60824393fbc8cb39309f3417d8d88433255c6ba" exitCode=0 Feb 01 14:34:30 crc kubenswrapper[4820]: I0201 14:34:30.353997 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65" event={"ID":"59b8afbb-2970-4526-be1a-4a52966f2387","Type":"ContainerDied","Data":"57bedd6032e670f97877c10ad60824393fbc8cb39309f3417d8d88433255c6ba"} Feb 01 14:34:30 crc kubenswrapper[4820]: I0201 14:34:30.354324 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65" event={"ID":"59b8afbb-2970-4526-be1a-4a52966f2387","Type":"ContainerStarted","Data":"c3826fc78d0200677f6b4d1f7b8323738e0050b95f72f4702babae46699adaf1"} Feb 01 14:34:31 crc kubenswrapper[4820]: I0201 14:34:31.366221 4820 generic.go:334] "Generic (PLEG): container finished" podID="59b8afbb-2970-4526-be1a-4a52966f2387" containerID="e6150a807b858e7cd69e17c8f946f45261d5ed6ec73220b3f30adb91eee44a90" exitCode=0 Feb 01 14:34:31 crc kubenswrapper[4820]: I0201 14:34:31.366258 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65" event={"ID":"59b8afbb-2970-4526-be1a-4a52966f2387","Type":"ContainerDied","Data":"e6150a807b858e7cd69e17c8f946f45261d5ed6ec73220b3f30adb91eee44a90"} Feb 01 14:34:32 crc kubenswrapper[4820]: I0201 14:34:32.373267 4820 generic.go:334] "Generic (PLEG): container finished" podID="59b8afbb-2970-4526-be1a-4a52966f2387" containerID="40036c13d1a2a027a15cf94e83e7ef7a1e9e9a9027e1fec4849897a0a13b3a0b" exitCode=0 Feb 01 14:34:32 crc kubenswrapper[4820]: I0201 14:34:32.373378 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65" event={"ID":"59b8afbb-2970-4526-be1a-4a52966f2387","Type":"ContainerDied","Data":"40036c13d1a2a027a15cf94e83e7ef7a1e9e9a9027e1fec4849897a0a13b3a0b"} Feb 01 14:34:33 crc kubenswrapper[4820]: I0201 14:34:33.704506 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65" Feb 01 14:34:33 crc kubenswrapper[4820]: I0201 14:34:33.771305 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz6dd\" (UniqueName: \"kubernetes.io/projected/59b8afbb-2970-4526-be1a-4a52966f2387-kube-api-access-hz6dd\") pod \"59b8afbb-2970-4526-be1a-4a52966f2387\" (UID: \"59b8afbb-2970-4526-be1a-4a52966f2387\") " Feb 01 14:34:33 crc kubenswrapper[4820]: I0201 14:34:33.771433 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59b8afbb-2970-4526-be1a-4a52966f2387-bundle\") pod \"59b8afbb-2970-4526-be1a-4a52966f2387\" (UID: \"59b8afbb-2970-4526-be1a-4a52966f2387\") " Feb 01 14:34:33 crc kubenswrapper[4820]: I0201 14:34:33.772189 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59b8afbb-2970-4526-be1a-4a52966f2387-bundle" (OuterVolumeSpecName: "bundle") pod "59b8afbb-2970-4526-be1a-4a52966f2387" (UID: "59b8afbb-2970-4526-be1a-4a52966f2387"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:34:33 crc kubenswrapper[4820]: I0201 14:34:33.776771 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59b8afbb-2970-4526-be1a-4a52966f2387-kube-api-access-hz6dd" (OuterVolumeSpecName: "kube-api-access-hz6dd") pod "59b8afbb-2970-4526-be1a-4a52966f2387" (UID: "59b8afbb-2970-4526-be1a-4a52966f2387"). InnerVolumeSpecName "kube-api-access-hz6dd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:34:33 crc kubenswrapper[4820]: I0201 14:34:33.872956 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59b8afbb-2970-4526-be1a-4a52966f2387-util\") pod \"59b8afbb-2970-4526-be1a-4a52966f2387\" (UID: \"59b8afbb-2970-4526-be1a-4a52966f2387\") " Feb 01 14:34:33 crc kubenswrapper[4820]: I0201 14:34:33.873376 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hz6dd\" (UniqueName: \"kubernetes.io/projected/59b8afbb-2970-4526-be1a-4a52966f2387-kube-api-access-hz6dd\") on node \"crc\" DevicePath \"\"" Feb 01 14:34:33 crc kubenswrapper[4820]: I0201 14:34:33.873413 4820 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59b8afbb-2970-4526-be1a-4a52966f2387-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:34:33 crc kubenswrapper[4820]: I0201 14:34:33.887452 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59b8afbb-2970-4526-be1a-4a52966f2387-util" (OuterVolumeSpecName: "util") pod "59b8afbb-2970-4526-be1a-4a52966f2387" (UID: "59b8afbb-2970-4526-be1a-4a52966f2387"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:34:33 crc kubenswrapper[4820]: I0201 14:34:33.974032 4820 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59b8afbb-2970-4526-be1a-4a52966f2387-util\") on node \"crc\" DevicePath \"\"" Feb 01 14:34:34 crc kubenswrapper[4820]: I0201 14:34:34.390152 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65" event={"ID":"59b8afbb-2970-4526-be1a-4a52966f2387","Type":"ContainerDied","Data":"c3826fc78d0200677f6b4d1f7b8323738e0050b95f72f4702babae46699adaf1"} Feb 01 14:34:34 crc kubenswrapper[4820]: I0201 14:34:34.390207 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3826fc78d0200677f6b4d1f7b8323738e0050b95f72f4702babae46699adaf1" Feb 01 14:34:34 crc kubenswrapper[4820]: I0201 14:34:34.390219 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65" Feb 01 14:34:36 crc kubenswrapper[4820]: I0201 14:34:36.600196 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-57d5594849-c9q67"] Feb 01 14:34:36 crc kubenswrapper[4820]: E0201 14:34:36.600831 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59b8afbb-2970-4526-be1a-4a52966f2387" containerName="util" Feb 01 14:34:36 crc kubenswrapper[4820]: I0201 14:34:36.600847 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="59b8afbb-2970-4526-be1a-4a52966f2387" containerName="util" Feb 01 14:34:36 crc kubenswrapper[4820]: E0201 14:34:36.600895 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59b8afbb-2970-4526-be1a-4a52966f2387" containerName="pull" Feb 01 14:34:36 crc kubenswrapper[4820]: I0201 14:34:36.600904 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="59b8afbb-2970-4526-be1a-4a52966f2387" containerName="pull" Feb 01 14:34:36 crc kubenswrapper[4820]: E0201 14:34:36.600914 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59b8afbb-2970-4526-be1a-4a52966f2387" containerName="extract" Feb 01 14:34:36 crc kubenswrapper[4820]: I0201 14:34:36.600921 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="59b8afbb-2970-4526-be1a-4a52966f2387" containerName="extract" Feb 01 14:34:36 crc kubenswrapper[4820]: I0201 14:34:36.601054 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="59b8afbb-2970-4526-be1a-4a52966f2387" containerName="extract" Feb 01 14:34:36 crc kubenswrapper[4820]: I0201 14:34:36.601553 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-57d5594849-c9q67" Feb 01 14:34:36 crc kubenswrapper[4820]: I0201 14:34:36.609508 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-w8976" Feb 01 14:34:36 crc kubenswrapper[4820]: I0201 14:34:36.633880 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-57d5594849-c9q67"] Feb 01 14:34:36 crc kubenswrapper[4820]: I0201 14:34:36.712375 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkzf6\" (UniqueName: \"kubernetes.io/projected/cd9d8871-b20a-4c21-b59e-3a7610021960-kube-api-access-mkzf6\") pod \"openstack-operator-controller-init-57d5594849-c9q67\" (UID: \"cd9d8871-b20a-4c21-b59e-3a7610021960\") " pod="openstack-operators/openstack-operator-controller-init-57d5594849-c9q67" Feb 01 14:34:36 crc kubenswrapper[4820]: I0201 14:34:36.813281 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkzf6\" (UniqueName: \"kubernetes.io/projected/cd9d8871-b20a-4c21-b59e-3a7610021960-kube-api-access-mkzf6\") pod \"openstack-operator-controller-init-57d5594849-c9q67\" (UID: \"cd9d8871-b20a-4c21-b59e-3a7610021960\") " pod="openstack-operators/openstack-operator-controller-init-57d5594849-c9q67" Feb 01 14:34:36 crc kubenswrapper[4820]: I0201 14:34:36.840268 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkzf6\" (UniqueName: \"kubernetes.io/projected/cd9d8871-b20a-4c21-b59e-3a7610021960-kube-api-access-mkzf6\") pod \"openstack-operator-controller-init-57d5594849-c9q67\" (UID: \"cd9d8871-b20a-4c21-b59e-3a7610021960\") " pod="openstack-operators/openstack-operator-controller-init-57d5594849-c9q67" Feb 01 14:34:36 crc kubenswrapper[4820]: I0201 14:34:36.917569 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-57d5594849-c9q67" Feb 01 14:34:37 crc kubenswrapper[4820]: I0201 14:34:37.333610 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-57d5594849-c9q67"] Feb 01 14:34:37 crc kubenswrapper[4820]: I0201 14:34:37.411857 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-57d5594849-c9q67" event={"ID":"cd9d8871-b20a-4c21-b59e-3a7610021960","Type":"ContainerStarted","Data":"3f87414f5580adbbdf8f8b6406edfbcd710be8a7105061ef442eccd61d056cb2"} Feb 01 14:34:41 crc kubenswrapper[4820]: I0201 14:34:41.449908 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-57d5594849-c9q67" event={"ID":"cd9d8871-b20a-4c21-b59e-3a7610021960","Type":"ContainerStarted","Data":"e09dfac3036a24c45dc8d6f24b95c12290492ffeb456e80e1b5e0689cc168275"} Feb 01 14:34:41 crc kubenswrapper[4820]: I0201 14:34:41.450983 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-57d5594849-c9q67" Feb 01 14:34:41 crc kubenswrapper[4820]: I0201 14:34:41.494634 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-57d5594849-c9q67" podStartSLOduration=1.731214647 podStartE2EDuration="5.49461733s" podCreationTimestamp="2026-02-01 14:34:36 +0000 UTC" firstStartedPulling="2026-02-01 14:34:37.341074529 +0000 UTC m=+818.861440813" lastFinishedPulling="2026-02-01 14:34:41.104477212 +0000 UTC m=+822.624843496" observedRunningTime="2026-02-01 14:34:41.491730191 +0000 UTC m=+823.012096475" watchObservedRunningTime="2026-02-01 14:34:41.49461733 +0000 UTC m=+823.014983614" Feb 01 14:34:42 crc kubenswrapper[4820]: I0201 14:34:42.509181 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-72snl"] Feb 01 14:34:42 crc kubenswrapper[4820]: I0201 14:34:42.511106 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-72snl" Feb 01 14:34:42 crc kubenswrapper[4820]: I0201 14:34:42.524975 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-72snl"] Feb 01 14:34:42 crc kubenswrapper[4820]: I0201 14:34:42.607141 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6b10c2b-69ce-4d1f-89c8-46b51e5ae974-catalog-content\") pod \"redhat-marketplace-72snl\" (UID: \"e6b10c2b-69ce-4d1f-89c8-46b51e5ae974\") " pod="openshift-marketplace/redhat-marketplace-72snl" Feb 01 14:34:42 crc kubenswrapper[4820]: I0201 14:34:42.607185 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggp7v\" (UniqueName: \"kubernetes.io/projected/e6b10c2b-69ce-4d1f-89c8-46b51e5ae974-kube-api-access-ggp7v\") pod \"redhat-marketplace-72snl\" (UID: \"e6b10c2b-69ce-4d1f-89c8-46b51e5ae974\") " pod="openshift-marketplace/redhat-marketplace-72snl" Feb 01 14:34:42 crc kubenswrapper[4820]: I0201 14:34:42.607319 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6b10c2b-69ce-4d1f-89c8-46b51e5ae974-utilities\") pod \"redhat-marketplace-72snl\" (UID: \"e6b10c2b-69ce-4d1f-89c8-46b51e5ae974\") " pod="openshift-marketplace/redhat-marketplace-72snl" Feb 01 14:34:42 crc kubenswrapper[4820]: I0201 14:34:42.708847 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6b10c2b-69ce-4d1f-89c8-46b51e5ae974-utilities\") pod \"redhat-marketplace-72snl\" (UID: \"e6b10c2b-69ce-4d1f-89c8-46b51e5ae974\") " pod="openshift-marketplace/redhat-marketplace-72snl" Feb 01 14:34:42 crc kubenswrapper[4820]: I0201 14:34:42.708986 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6b10c2b-69ce-4d1f-89c8-46b51e5ae974-catalog-content\") pod \"redhat-marketplace-72snl\" (UID: \"e6b10c2b-69ce-4d1f-89c8-46b51e5ae974\") " pod="openshift-marketplace/redhat-marketplace-72snl" Feb 01 14:34:42 crc kubenswrapper[4820]: I0201 14:34:42.709011 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggp7v\" (UniqueName: \"kubernetes.io/projected/e6b10c2b-69ce-4d1f-89c8-46b51e5ae974-kube-api-access-ggp7v\") pod \"redhat-marketplace-72snl\" (UID: \"e6b10c2b-69ce-4d1f-89c8-46b51e5ae974\") " pod="openshift-marketplace/redhat-marketplace-72snl" Feb 01 14:34:42 crc kubenswrapper[4820]: I0201 14:34:42.709838 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6b10c2b-69ce-4d1f-89c8-46b51e5ae974-utilities\") pod \"redhat-marketplace-72snl\" (UID: \"e6b10c2b-69ce-4d1f-89c8-46b51e5ae974\") " pod="openshift-marketplace/redhat-marketplace-72snl" Feb 01 14:34:42 crc kubenswrapper[4820]: I0201 14:34:42.710126 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6b10c2b-69ce-4d1f-89c8-46b51e5ae974-catalog-content\") pod \"redhat-marketplace-72snl\" (UID: \"e6b10c2b-69ce-4d1f-89c8-46b51e5ae974\") " pod="openshift-marketplace/redhat-marketplace-72snl" Feb 01 14:34:42 crc kubenswrapper[4820]: I0201 14:34:42.740134 4820 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ggp7v\" (UniqueName: \"kubernetes.io/projected/e6b10c2b-69ce-4d1f-89c8-46b51e5ae974-kube-api-access-ggp7v\") pod \"redhat-marketplace-72snl\" (UID: \"e6b10c2b-69ce-4d1f-89c8-46b51e5ae974\") " pod="openshift-marketplace/redhat-marketplace-72snl" Feb 01 14:34:42 crc kubenswrapper[4820]: I0201 14:34:42.832683 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-72snl" Feb 01 14:34:43 crc kubenswrapper[4820]: I0201 14:34:43.228621 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-72snl"] Feb 01 14:34:43 crc kubenswrapper[4820]: W0201 14:34:43.237803 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6b10c2b_69ce_4d1f_89c8_46b51e5ae974.slice/crio-17231610a59341fe98a34e3f1f9e34b0e173ff8b2cc383f0baefa9f7329673ae WatchSource:0}: Error finding container 17231610a59341fe98a34e3f1f9e34b0e173ff8b2cc383f0baefa9f7329673ae: Status 404 returned error can't find the container with id 17231610a59341fe98a34e3f1f9e34b0e173ff8b2cc383f0baefa9f7329673ae Feb 01 14:34:43 crc kubenswrapper[4820]: I0201 14:34:43.461139 4820 generic.go:334] "Generic (PLEG): container finished" podID="e6b10c2b-69ce-4d1f-89c8-46b51e5ae974" containerID="6563b9762705e31202dbba6441855cb357eb5c4c8d1843f95e668dabd39cd19c" exitCode=0 Feb 01 14:34:43 crc kubenswrapper[4820]: I0201 14:34:43.461191 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-72snl" event={"ID":"e6b10c2b-69ce-4d1f-89c8-46b51e5ae974","Type":"ContainerDied","Data":"6563b9762705e31202dbba6441855cb357eb5c4c8d1843f95e668dabd39cd19c"} Feb 01 14:34:43 crc kubenswrapper[4820]: I0201 14:34:43.461468 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-72snl" event={"ID":"e6b10c2b-69ce-4d1f-89c8-46b51e5ae974","Type":"ContainerStarted","Data":"17231610a59341fe98a34e3f1f9e34b0e173ff8b2cc383f0baefa9f7329673ae"} Feb 01 14:34:44 crc kubenswrapper[4820]: I0201 14:34:44.468351 4820 generic.go:334] "Generic (PLEG): container finished" podID="e6b10c2b-69ce-4d1f-89c8-46b51e5ae974" containerID="b3b199b38b3f89e5ca09eb03a00e36dcb580be672c1ce2d3b1373154bf385983" exitCode=0 Feb 01 14:34:44 crc kubenswrapper[4820]: I0201 14:34:44.468401 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-72snl" event={"ID":"e6b10c2b-69ce-4d1f-89c8-46b51e5ae974","Type":"ContainerDied","Data":"b3b199b38b3f89e5ca09eb03a00e36dcb580be672c1ce2d3b1373154bf385983"} Feb 01 14:34:45 crc kubenswrapper[4820]: I0201 14:34:45.475783 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-72snl" event={"ID":"e6b10c2b-69ce-4d1f-89c8-46b51e5ae974","Type":"ContainerStarted","Data":"741a21fed0288830963f8002c738ee9fa9991e073315049043dee334e6efa4da"} Feb 01 14:34:45 crc kubenswrapper[4820]: I0201 14:34:45.494637 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-72snl" podStartSLOduration=2.113059692 podStartE2EDuration="3.494617865s" podCreationTimestamp="2026-02-01 14:34:42 +0000 UTC" firstStartedPulling="2026-02-01 14:34:43.462941148 +0000 UTC m=+824.983307432" lastFinishedPulling="2026-02-01 14:34:44.844499321 +0000 UTC m=+826.364865605" observedRunningTime="2026-02-01 14:34:45.494055552 +0000 UTC m=+827.014421836" 
watchObservedRunningTime="2026-02-01 14:34:45.494617865 +0000 UTC m=+827.014984149" Feb 01 14:34:46 crc kubenswrapper[4820]: I0201 14:34:46.920201 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-57d5594849-c9q67" Feb 01 14:34:52 crc kubenswrapper[4820]: I0201 14:34:52.832934 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-72snl" Feb 01 14:34:52 crc kubenswrapper[4820]: I0201 14:34:52.833464 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-72snl" Feb 01 14:34:52 crc kubenswrapper[4820]: I0201 14:34:52.878553 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-72snl" Feb 01 14:34:53 crc kubenswrapper[4820]: I0201 14:34:53.569197 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-72snl" Feb 01 14:34:53 crc kubenswrapper[4820]: I0201 14:34:53.607188 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-72snl"] Feb 01 14:34:55 crc kubenswrapper[4820]: I0201 14:34:55.537449 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-72snl" podUID="e6b10c2b-69ce-4d1f-89c8-46b51e5ae974" containerName="registry-server" containerID="cri-o://741a21fed0288830963f8002c738ee9fa9991e073315049043dee334e6efa4da" gracePeriod=2 Feb 01 14:34:55 crc kubenswrapper[4820]: I0201 14:34:55.918524 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-72snl" Feb 01 14:34:56 crc kubenswrapper[4820]: I0201 14:34:56.073858 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggp7v\" (UniqueName: \"kubernetes.io/projected/e6b10c2b-69ce-4d1f-89c8-46b51e5ae974-kube-api-access-ggp7v\") pod \"e6b10c2b-69ce-4d1f-89c8-46b51e5ae974\" (UID: \"e6b10c2b-69ce-4d1f-89c8-46b51e5ae974\") " Feb 01 14:34:56 crc kubenswrapper[4820]: I0201 14:34:56.073995 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6b10c2b-69ce-4d1f-89c8-46b51e5ae974-catalog-content\") pod \"e6b10c2b-69ce-4d1f-89c8-46b51e5ae974\" (UID: \"e6b10c2b-69ce-4d1f-89c8-46b51e5ae974\") " Feb 01 14:34:56 crc kubenswrapper[4820]: I0201 14:34:56.074029 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6b10c2b-69ce-4d1f-89c8-46b51e5ae974-utilities\") pod \"e6b10c2b-69ce-4d1f-89c8-46b51e5ae974\" (UID: \"e6b10c2b-69ce-4d1f-89c8-46b51e5ae974\") " Feb 01 14:34:56 crc kubenswrapper[4820]: I0201 14:34:56.075048 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6b10c2b-69ce-4d1f-89c8-46b51e5ae974-utilities" (OuterVolumeSpecName: "utilities") pod "e6b10c2b-69ce-4d1f-89c8-46b51e5ae974" (UID: "e6b10c2b-69ce-4d1f-89c8-46b51e5ae974"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:34:56 crc kubenswrapper[4820]: I0201 14:34:56.081215 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6b10c2b-69ce-4d1f-89c8-46b51e5ae974-kube-api-access-ggp7v" (OuterVolumeSpecName: "kube-api-access-ggp7v") pod "e6b10c2b-69ce-4d1f-89c8-46b51e5ae974" (UID: "e6b10c2b-69ce-4d1f-89c8-46b51e5ae974"). InnerVolumeSpecName "kube-api-access-ggp7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:34:56 crc kubenswrapper[4820]: I0201 14:34:56.101532 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6b10c2b-69ce-4d1f-89c8-46b51e5ae974-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6b10c2b-69ce-4d1f-89c8-46b51e5ae974" (UID: "e6b10c2b-69ce-4d1f-89c8-46b51e5ae974"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:34:56 crc kubenswrapper[4820]: I0201 14:34:56.175655 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6b10c2b-69ce-4d1f-89c8-46b51e5ae974-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 14:34:56 crc kubenswrapper[4820]: I0201 14:34:56.175687 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6b10c2b-69ce-4d1f-89c8-46b51e5ae974-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 14:34:56 crc kubenswrapper[4820]: I0201 14:34:56.175699 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggp7v\" (UniqueName: \"kubernetes.io/projected/e6b10c2b-69ce-4d1f-89c8-46b51e5ae974-kube-api-access-ggp7v\") on node \"crc\" DevicePath \"\"" Feb 01 14:34:56 crc kubenswrapper[4820]: I0201 14:34:56.545850 4820 generic.go:334] "Generic (PLEG): container finished" podID="e6b10c2b-69ce-4d1f-89c8-46b51e5ae974" containerID="741a21fed0288830963f8002c738ee9fa9991e073315049043dee334e6efa4da" exitCode=0 Feb 01 14:34:56 crc kubenswrapper[4820]: I0201 14:34:56.545926 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-72snl" event={"ID":"e6b10c2b-69ce-4d1f-89c8-46b51e5ae974","Type":"ContainerDied","Data":"741a21fed0288830963f8002c738ee9fa9991e073315049043dee334e6efa4da"} Feb 01 14:34:56 crc kubenswrapper[4820]: I0201 14:34:56.545975 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-72snl" event={"ID":"e6b10c2b-69ce-4d1f-89c8-46b51e5ae974","Type":"ContainerDied","Data":"17231610a59341fe98a34e3f1f9e34b0e173ff8b2cc383f0baefa9f7329673ae"} Feb 01 14:34:56 crc kubenswrapper[4820]: I0201 14:34:56.545974 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-72snl" Feb 01 14:34:56 crc kubenswrapper[4820]: I0201 14:34:56.545995 4820 scope.go:117] "RemoveContainer" containerID="741a21fed0288830963f8002c738ee9fa9991e073315049043dee334e6efa4da" Feb 01 14:34:56 crc kubenswrapper[4820]: I0201 14:34:56.566904 4820 scope.go:117] "RemoveContainer" containerID="b3b199b38b3f89e5ca09eb03a00e36dcb580be672c1ce2d3b1373154bf385983" Feb 01 14:34:56 crc kubenswrapper[4820]: I0201 14:34:56.583051 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-72snl"] Feb 01 14:34:56 crc kubenswrapper[4820]: I0201 14:34:56.586391 4820 scope.go:117] "RemoveContainer" containerID="6563b9762705e31202dbba6441855cb357eb5c4c8d1843f95e668dabd39cd19c" Feb 01 14:34:56 crc kubenswrapper[4820]: I0201 14:34:56.588408 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-72snl"] Feb 01 14:34:56 crc kubenswrapper[4820]: I0201 14:34:56.604886 4820 scope.go:117] "RemoveContainer" containerID="741a21fed0288830963f8002c738ee9fa9991e073315049043dee334e6efa4da" Feb 01 14:34:56 crc kubenswrapper[4820]: E0201 14:34:56.605356 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"741a21fed0288830963f8002c738ee9fa9991e073315049043dee334e6efa4da\": container with ID starting with 741a21fed0288830963f8002c738ee9fa9991e073315049043dee334e6efa4da not found: ID does not exist" containerID="741a21fed0288830963f8002c738ee9fa9991e073315049043dee334e6efa4da" Feb 01 14:34:56 crc kubenswrapper[4820]: I0201 14:34:56.605462 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"741a21fed0288830963f8002c738ee9fa9991e073315049043dee334e6efa4da"} err="failed to get container status \"741a21fed0288830963f8002c738ee9fa9991e073315049043dee334e6efa4da\": rpc error: code = NotFound desc = could not find container \"741a21fed0288830963f8002c738ee9fa9991e073315049043dee334e6efa4da\": container with ID starting with 741a21fed0288830963f8002c738ee9fa9991e073315049043dee334e6efa4da not found: ID does not exist" Feb 01 14:34:56 crc kubenswrapper[4820]: I0201 14:34:56.605541 4820 scope.go:117] "RemoveContainer" containerID="b3b199b38b3f89e5ca09eb03a00e36dcb580be672c1ce2d3b1373154bf385983" Feb 01 14:34:56 crc kubenswrapper[4820]: E0201 14:34:56.605976 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3b199b38b3f89e5ca09eb03a00e36dcb580be672c1ce2d3b1373154bf385983\": container with ID starting with b3b199b38b3f89e5ca09eb03a00e36dcb580be672c1ce2d3b1373154bf385983 not found: ID does not exist" containerID="b3b199b38b3f89e5ca09eb03a00e36dcb580be672c1ce2d3b1373154bf385983" Feb 01 14:34:56 crc kubenswrapper[4820]: I0201 14:34:56.606035 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3b199b38b3f89e5ca09eb03a00e36dcb580be672c1ce2d3b1373154bf385983"} err="failed to get container status \"b3b199b38b3f89e5ca09eb03a00e36dcb580be672c1ce2d3b1373154bf385983\": rpc error: code = NotFound desc = could not find container \"b3b199b38b3f89e5ca09eb03a00e36dcb580be672c1ce2d3b1373154bf385983\": container with ID starting with b3b199b38b3f89e5ca09eb03a00e36dcb580be672c1ce2d3b1373154bf385983 not found: ID does not exist" Feb 01 14:34:56 crc kubenswrapper[4820]: I0201 14:34:56.606073 4820 scope.go:117] "RemoveContainer" 
containerID="6563b9762705e31202dbba6441855cb357eb5c4c8d1843f95e668dabd39cd19c" Feb 01 14:34:56 crc kubenswrapper[4820]: E0201 14:34:56.606430 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6563b9762705e31202dbba6441855cb357eb5c4c8d1843f95e668dabd39cd19c\": container with ID starting with 6563b9762705e31202dbba6441855cb357eb5c4c8d1843f95e668dabd39cd19c not found: ID does not exist" containerID="6563b9762705e31202dbba6441855cb357eb5c4c8d1843f95e668dabd39cd19c" Feb 01 14:34:56 crc kubenswrapper[4820]: I0201 14:34:56.606451 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6563b9762705e31202dbba6441855cb357eb5c4c8d1843f95e668dabd39cd19c"} err="failed to get container status \"6563b9762705e31202dbba6441855cb357eb5c4c8d1843f95e668dabd39cd19c\": rpc error: code = NotFound desc = could not find container \"6563b9762705e31202dbba6441855cb357eb5c4c8d1843f95e668dabd39cd19c\": container with ID starting with 6563b9762705e31202dbba6441855cb357eb5c4c8d1843f95e668dabd39cd19c not found: ID does not exist" Feb 01 14:34:57 crc kubenswrapper[4820]: I0201 14:34:57.206278 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6b10c2b-69ce-4d1f-89c8-46b51e5ae974" path="/var/lib/kubelet/pods/e6b10c2b-69ce-4d1f-89c8-46b51e5ae974/volumes" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.786109 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wnzxh"] Feb 01 14:35:05 crc kubenswrapper[4820]: E0201 14:35:05.786838 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b10c2b-69ce-4d1f-89c8-46b51e5ae974" containerName="extract-content" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.786850 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b10c2b-69ce-4d1f-89c8-46b51e5ae974" containerName="extract-content" Feb 01 14:35:05 crc kubenswrapper[4820]: E0201 14:35:05.786862 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b10c2b-69ce-4d1f-89c8-46b51e5ae974" containerName="registry-server" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.786868 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b10c2b-69ce-4d1f-89c8-46b51e5ae974" containerName="registry-server" Feb 01 14:35:05 crc kubenswrapper[4820]: E0201 14:35:05.786898 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b10c2b-69ce-4d1f-89c8-46b51e5ae974" containerName="extract-utilities" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.786904 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b10c2b-69ce-4d1f-89c8-46b51e5ae974" containerName="extract-utilities" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.787005 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b10c2b-69ce-4d1f-89c8-46b51e5ae974" containerName="registry-server" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.787411 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wnzxh" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.791369 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-8kh5r" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.792243 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-9fz5b"] Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.792984 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9fz5b" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.794725 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-tscw9" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.804965 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzz8k\" (UniqueName: \"kubernetes.io/projected/52e7e427-3656-4c7a-afbb-a7cbd89d9318-kube-api-access-gzz8k\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-wnzxh\" (UID: \"52e7e427-3656-4c7a-afbb-a7cbd89d9318\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wnzxh" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.805057 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44c4m\" (UniqueName: \"kubernetes.io/projected/4a8d372d-6d1c-4366-983c-947ccd5403e6-kube-api-access-44c4m\") pod \"cinder-operator-controller-manager-8d874c8fc-9fz5b\" (UID: \"4a8d372d-6d1c-4366-983c-947ccd5403e6\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9fz5b" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.805574 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wnzxh"] Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.809502 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-rf496"] Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.810375 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rf496" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.812770 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-tk88j" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.820354 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-9fz5b"] Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.832891 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-rf496"] Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.844255 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-slbdv"] Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.845039 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-slbdv" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.846346 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-jp22s" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.848443 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-h9f62"] Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.849423 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-h9f62" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.853671 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-rsh2k" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.857168 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-slbdv"] Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.859965 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-h9f62"] Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.870301 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-fqfvc"] Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.871076 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fqfvc" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.872809 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-tct7j" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.881472 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-fqfvc"] Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.888125 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-tbjvq"] Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.889107 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tbjvq" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.891482 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.891839 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-n296t" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.906924 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgg7f\" (UniqueName: \"kubernetes.io/projected/752637ba-3ba3-4b87-89f4-d2a079743b70-kube-api-access-tgg7f\") pod \"heat-operator-controller-manager-69d6db494d-h9f62\" (UID: \"752637ba-3ba3-4b87-89f4-d2a079743b70\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-h9f62" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.906981 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4szbx\" (UniqueName: \"kubernetes.io/projected/670d53e4-21ca-4ec7-b72b-0e938b2d85e8-kube-api-access-4szbx\") pod \"horizon-operator-controller-manager-5fb775575f-fqfvc\" (UID: \"670d53e4-21ca-4ec7-b72b-0e938b2d85e8\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fqfvc" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.907001 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47lfv\" (UniqueName: \"kubernetes.io/projected/09ceaf4b-4a63-4ba6-9b77-ac550850ffe4-kube-api-access-47lfv\") pod \"infra-operator-controller-manager-79955696d6-tbjvq\" (UID: \"09ceaf4b-4a63-4ba6-9b77-ac550850ffe4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tbjvq" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.907019 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmfbs\" (UniqueName: \"kubernetes.io/projected/90739996-10ed-4ad6-b704-012fafc3505f-kube-api-access-hmfbs\") pod \"designate-operator-controller-manager-6d9697b7f4-rf496\" (UID: \"90739996-10ed-4ad6-b704-012fafc3505f\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rf496" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.907044 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09ceaf4b-4a63-4ba6-9b77-ac550850ffe4-cert\") pod \"infra-operator-controller-manager-79955696d6-tbjvq\" (UID: \"09ceaf4b-4a63-4ba6-9b77-ac550850ffe4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tbjvq" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.907081 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzz8k\" (UniqueName: \"kubernetes.io/projected/52e7e427-3656-4c7a-afbb-a7cbd89d9318-kube-api-access-gzz8k\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-wnzxh\" (UID: \"52e7e427-3656-4c7a-afbb-a7cbd89d9318\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wnzxh" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.907106 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44c4m\" (UniqueName: 
\"kubernetes.io/projected/4a8d372d-6d1c-4366-983c-947ccd5403e6-kube-api-access-44c4m\") pod \"cinder-operator-controller-manager-8d874c8fc-9fz5b\" (UID: \"4a8d372d-6d1c-4366-983c-947ccd5403e6\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9fz5b" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.907123 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzf2v\" (UniqueName: \"kubernetes.io/projected/92bf88c4-d908-4d13-be4e-72936688113c-kube-api-access-mzf2v\") pod \"glance-operator-controller-manager-8886f4c47-slbdv\" (UID: \"92bf88c4-d908-4d13-be4e-72936688113c\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-slbdv" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.930066 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzz8k\" (UniqueName: \"kubernetes.io/projected/52e7e427-3656-4c7a-afbb-a7cbd89d9318-kube-api-access-gzz8k\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-wnzxh\" (UID: \"52e7e427-3656-4c7a-afbb-a7cbd89d9318\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wnzxh" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.936906 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-tbjvq"] Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.937361 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44c4m\" (UniqueName: \"kubernetes.io/projected/4a8d372d-6d1c-4366-983c-947ccd5403e6-kube-api-access-44c4m\") pod \"cinder-operator-controller-manager-8d874c8fc-9fz5b\" (UID: \"4a8d372d-6d1c-4366-983c-947ccd5403e6\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9fz5b" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.944188 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9s78"] Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.946111 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9s78" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.951935 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-r7l5f" Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.975238 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-x5p2q"] Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.988143 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-d85699b78-lhgzd"] Feb 01 14:35:05 crc kubenswrapper[4820]: I0201 14:35:05.988389 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-x5p2q" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.004177 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-d85699b78-lhgzd" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.006465 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-fs54m" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.007059 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-n7xf2" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.007233 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9s78"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.007524 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09ceaf4b-4a63-4ba6-9b77-ac550850ffe4-cert\") pod \"infra-operator-controller-manager-79955696d6-tbjvq\" (UID: \"09ceaf4b-4a63-4ba6-9b77-ac550850ffe4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tbjvq" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.008059 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrmfx\" (UniqueName: \"kubernetes.io/projected/abe5d51e-a818-4e88-93d9-4f53a8d368b4-kube-api-access-lrmfx\") pod \"keystone-operator-controller-manager-84f48565d4-x5p2q\" (UID: \"abe5d51e-a818-4e88-93d9-4f53a8d368b4\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-x5p2q" Feb 01 14:35:06 crc kubenswrapper[4820]: E0201 14:35:06.008135 4820 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 01 14:35:06 crc kubenswrapper[4820]: E0201 14:35:06.008180 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09ceaf4b-4a63-4ba6-9b77-ac550850ffe4-cert podName:09ceaf4b-4a63-4ba6-9b77-ac550850ffe4 nodeName:}" failed. No retries permitted until 2026-02-01 14:35:06.508165058 +0000 UTC m=+848.028531342 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/09ceaf4b-4a63-4ba6-9b77-ac550850ffe4-cert") pod "infra-operator-controller-manager-79955696d6-tbjvq" (UID: "09ceaf4b-4a63-4ba6-9b77-ac550850ffe4") : secret "infra-operator-webhook-server-cert" not found Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.008262 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzf2v\" (UniqueName: \"kubernetes.io/projected/92bf88c4-d908-4d13-be4e-72936688113c-kube-api-access-mzf2v\") pod \"glance-operator-controller-manager-8886f4c47-slbdv\" (UID: \"92bf88c4-d908-4d13-be4e-72936688113c\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-slbdv" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.008331 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxh5z\" (UniqueName: \"kubernetes.io/projected/ccd8a397-de84-44f7-ae90-f0e7a51f2d80-kube-api-access-nxh5z\") pod \"ironic-operator-controller-manager-5f4b8bd54d-z9s78\" (UID: \"ccd8a397-de84-44f7-ae90-f0e7a51f2d80\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9s78" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.008357 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgg7f\" (UniqueName: \"kubernetes.io/projected/752637ba-3ba3-4b87-89f4-d2a079743b70-kube-api-access-tgg7f\") pod \"heat-operator-controller-manager-69d6db494d-h9f62\" (UID: \"752637ba-3ba3-4b87-89f4-d2a079743b70\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-h9f62" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.008391 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l66lw\" (UniqueName: \"kubernetes.io/projected/e5493c34-468b-4636-b236-9b5cb6e95de1-kube-api-access-l66lw\") pod \"manila-operator-controller-manager-d85699b78-lhgzd\" (UID: \"e5493c34-468b-4636-b236-9b5cb6e95de1\") " pod="openstack-operators/manila-operator-controller-manager-d85699b78-lhgzd" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.008424 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4szbx\" (UniqueName: \"kubernetes.io/projected/670d53e4-21ca-4ec7-b72b-0e938b2d85e8-kube-api-access-4szbx\") pod \"horizon-operator-controller-manager-5fb775575f-fqfvc\" (UID: \"670d53e4-21ca-4ec7-b72b-0e938b2d85e8\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fqfvc" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.008450 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47lfv\" (UniqueName: \"kubernetes.io/projected/09ceaf4b-4a63-4ba6-9b77-ac550850ffe4-kube-api-access-47lfv\") pod \"infra-operator-controller-manager-79955696d6-tbjvq\" (UID: \"09ceaf4b-4a63-4ba6-9b77-ac550850ffe4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tbjvq" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.008477 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmfbs\" (UniqueName: \"kubernetes.io/projected/90739996-10ed-4ad6-b704-012fafc3505f-kube-api-access-hmfbs\") pod \"designate-operator-controller-manager-6d9697b7f4-rf496\" (UID: \"90739996-10ed-4ad6-b704-012fafc3505f\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rf496" Feb 01 14:35:06 
crc kubenswrapper[4820]: I0201 14:35:06.025243 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-x5p2q"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.036278 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzf2v\" (UniqueName: \"kubernetes.io/projected/92bf88c4-d908-4d13-be4e-72936688113c-kube-api-access-mzf2v\") pod \"glance-operator-controller-manager-8886f4c47-slbdv\" (UID: \"92bf88c4-d908-4d13-be4e-72936688113c\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-slbdv" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.036682 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47lfv\" (UniqueName: \"kubernetes.io/projected/09ceaf4b-4a63-4ba6-9b77-ac550850ffe4-kube-api-access-47lfv\") pod \"infra-operator-controller-manager-79955696d6-tbjvq\" (UID: \"09ceaf4b-4a63-4ba6-9b77-ac550850ffe4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tbjvq" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.041320 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgg7f\" (UniqueName: \"kubernetes.io/projected/752637ba-3ba3-4b87-89f4-d2a079743b70-kube-api-access-tgg7f\") pod \"heat-operator-controller-manager-69d6db494d-h9f62\" (UID: \"752637ba-3ba3-4b87-89f4-d2a079743b70\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-h9f62" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.045793 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4szbx\" (UniqueName: \"kubernetes.io/projected/670d53e4-21ca-4ec7-b72b-0e938b2d85e8-kube-api-access-4szbx\") pod \"horizon-operator-controller-manager-5fb775575f-fqfvc\" (UID: \"670d53e4-21ca-4ec7-b72b-0e938b2d85e8\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fqfvc" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.048092 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-d85699b78-lhgzd"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.056590 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmfbs\" (UniqueName: \"kubernetes.io/projected/90739996-10ed-4ad6-b704-012fafc3505f-kube-api-access-hmfbs\") pod \"designate-operator-controller-manager-6d9697b7f4-rf496\" (UID: \"90739996-10ed-4ad6-b704-012fafc3505f\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rf496" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.089839 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-h5tlb"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.091114 4820 util.go:30] "No sandbox for pod can be found. 
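The failed cert mount above for infra-operator-controller-manager-79955696d6-tbjvq ("secret \"infra-operator-webhook-server-cert\" not found", "No retries permitted until ... durationBeforeRetry 500ms") is the nestedpendingoperations layer applying per-operation backoff: a failed MountVolume.SetUp may not be retried before its deadline, and the delay grows on each failure, starting at the 500ms seen here, so the pod simply stays Pending until the webhook certificate is published. A sketch of that retry discipline, with assumed constants (the doubling factor and struct are illustrative, not the kubelet's code):

package main

import (
	"errors"
	"fmt"
	"time"
)

// backoff models the behavior the log shows: an operation that fails may not
// be retried until its deadline passes, and the delay grows geometrically.
type backoff struct {
	delay    time.Duration
	notUntil time.Time
}

func (b *backoff) tryNow(op func() error, now time.Time) error {
	if now.Before(b.notUntil) {
		return fmt.Errorf("no retries permitted until %v", b.notUntil)
	}
	if err := op(); err != nil {
		if b.delay == 0 {
			b.delay = 500 * time.Millisecond // initial durationBeforeRetry
		} else {
			b.delay *= 2 // assumed growth factor for illustration
		}
		b.notUntil = now.Add(b.delay)
		return err
	}
	b.delay, b.notUntil = 0, time.Time{} // success resets the backoff
	return nil
}

func main() {
	b := &backoff{}
	now := time.Now()
	fail := func() error { return errors.New(`secret "infra-operator-webhook-server-cert" not found`) }
	for i := 0; i < 3; i++ {
		fmt.Println(b.tryNow(fail, now))
		now = b.notUntil // jump straight to the retry deadline
	}
}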
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-h5tlb" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.097663 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-qgbsk" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.101029 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-54ttw"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.102430 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-54ttw" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.105088 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-h5tlb"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.105188 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-r79l7" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.109063 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wnzxh" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.110326 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrmfx\" (UniqueName: \"kubernetes.io/projected/abe5d51e-a818-4e88-93d9-4f53a8d368b4-kube-api-access-lrmfx\") pod \"keystone-operator-controller-manager-84f48565d4-x5p2q\" (UID: \"abe5d51e-a818-4e88-93d9-4f53a8d368b4\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-x5p2q" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.110358 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pbhh\" (UniqueName: \"kubernetes.io/projected/7524b653-9c87-474e-b819-ebdd2864815c-kube-api-access-2pbhh\") pod \"mariadb-operator-controller-manager-67bf948998-h5tlb\" (UID: \"7524b653-9c87-474e-b819-ebdd2864815c\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-h5tlb" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.110402 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxh5z\" (UniqueName: \"kubernetes.io/projected/ccd8a397-de84-44f7-ae90-f0e7a51f2d80-kube-api-access-nxh5z\") pod \"ironic-operator-controller-manager-5f4b8bd54d-z9s78\" (UID: \"ccd8a397-de84-44f7-ae90-f0e7a51f2d80\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9s78" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.110460 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l66lw\" (UniqueName: \"kubernetes.io/projected/e5493c34-468b-4636-b236-9b5cb6e95de1-kube-api-access-l66lw\") pod \"manila-operator-controller-manager-d85699b78-lhgzd\" (UID: \"e5493c34-468b-4636-b236-9b5cb6e95de1\") " pod="openstack-operators/manila-operator-controller-manager-d85699b78-lhgzd" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.110490 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjk5r\" (UniqueName: \"kubernetes.io/projected/39a1a56a-4224-411d-95d1-49cf679d773e-kube-api-access-kjk5r\") pod \"neutron-operator-controller-manager-585dbc889-54ttw\" 
(UID: \"39a1a56a-4224-411d-95d1-49cf679d773e\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-54ttw" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.117031 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9fz5b" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.126406 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-54ttw"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.130605 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrmfx\" (UniqueName: \"kubernetes.io/projected/abe5d51e-a818-4e88-93d9-4f53a8d368b4-kube-api-access-lrmfx\") pod \"keystone-operator-controller-manager-84f48565d4-x5p2q\" (UID: \"abe5d51e-a818-4e88-93d9-4f53a8d368b4\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-x5p2q" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.131994 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l66lw\" (UniqueName: \"kubernetes.io/projected/e5493c34-468b-4636-b236-9b5cb6e95de1-kube-api-access-l66lw\") pod \"manila-operator-controller-manager-d85699b78-lhgzd\" (UID: \"e5493c34-468b-4636-b236-9b5cb6e95de1\") " pod="openstack-operators/manila-operator-controller-manager-d85699b78-lhgzd" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.134268 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxh5z\" (UniqueName: \"kubernetes.io/projected/ccd8a397-de84-44f7-ae90-f0e7a51f2d80-kube-api-access-nxh5z\") pod \"ironic-operator-controller-manager-5f4b8bd54d-z9s78\" (UID: \"ccd8a397-de84-44f7-ae90-f0e7a51f2d80\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9s78" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.135986 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-vn95k"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.137028 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vn95k" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.139908 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-ckjsg" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.144273 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-6lqdn"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.145174 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-6lqdn" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.146768 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-ctc5m" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.151799 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-vn95k"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.152401 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rf496" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.162494 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-6lqdn"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.168521 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.169431 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.172343 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-9n6rs" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.177510 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.179734 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-mljht"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.181057 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mljht" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.184310 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-pwhxl" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.185114 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.192916 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-mljht"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.200261 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-slbdv" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.203331 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-g7844"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.204222 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-g7844" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.206122 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-s8nxg" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.213651 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-g7844"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.214166 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-h9f62" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.215096 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjk5r\" (UniqueName: \"kubernetes.io/projected/39a1a56a-4224-411d-95d1-49cf679d773e-kube-api-access-kjk5r\") pod \"neutron-operator-controller-manager-585dbc889-54ttw\" (UID: \"39a1a56a-4224-411d-95d1-49cf679d773e\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-54ttw" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.215120 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmvbt\" (UniqueName: \"kubernetes.io/projected/7cc4d222-cdf7-4f68-b8b0-eb9a01b0cc12-kube-api-access-tmvbt\") pod \"octavia-operator-controller-manager-6687f8d877-6lqdn\" (UID: \"7cc4d222-cdf7-4f68-b8b0-eb9a01b0cc12\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-6lqdn" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.215245 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls2qv\" (UniqueName: \"kubernetes.io/projected/a56bcd98-4612-4d2a-b172-78d775c10b6a-kube-api-access-ls2qv\") pod \"nova-operator-controller-manager-55bff696bd-vn95k\" (UID: \"a56bcd98-4612-4d2a-b172-78d775c10b6a\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vn95k" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.215375 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45xpb\" (UniqueName: \"kubernetes.io/projected/07b45e70-e382-4778-8d7e-f360ab63dcf9-kube-api-access-45xpb\") pod \"ovn-operator-controller-manager-788c46999f-mljht\" (UID: \"07b45e70-e382-4778-8d7e-f360ab63dcf9\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mljht" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.215421 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/597a3429-85c5-4e47-983e-c77f2ccc22d3-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh\" (UID: \"597a3429-85c5-4e47-983e-c77f2ccc22d3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.215456 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwtsp\" (UniqueName: \"kubernetes.io/projected/597a3429-85c5-4e47-983e-c77f2ccc22d3-kube-api-access-wwtsp\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh\" (UID: \"597a3429-85c5-4e47-983e-c77f2ccc22d3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.215508 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmhx9\" (UniqueName: \"kubernetes.io/projected/d4bd9854-7d67-4698-a00a-61ea442cc25b-kube-api-access-jmhx9\") pod \"placement-operator-controller-manager-5b964cf4cd-g7844\" (UID: \"d4bd9854-7d67-4698-a00a-61ea442cc25b\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-g7844" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.215526 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2pbhh\" (UniqueName: \"kubernetes.io/projected/7524b653-9c87-474e-b819-ebdd2864815c-kube-api-access-2pbhh\") pod \"mariadb-operator-controller-manager-67bf948998-h5tlb\" (UID: \"7524b653-9c87-474e-b819-ebdd2864815c\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-h5tlb" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.227444 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-nqvp8"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.228159 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-nqvp8" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.232646 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-8l2dk" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.235780 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pbhh\" (UniqueName: \"kubernetes.io/projected/7524b653-9c87-474e-b819-ebdd2864815c-kube-api-access-2pbhh\") pod \"mariadb-operator-controller-manager-67bf948998-h5tlb\" (UID: \"7524b653-9c87-474e-b819-ebdd2864815c\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-h5tlb" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.242281 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-nqvp8"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.243309 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjk5r\" (UniqueName: \"kubernetes.io/projected/39a1a56a-4224-411d-95d1-49cf679d773e-kube-api-access-kjk5r\") pod \"neutron-operator-controller-manager-585dbc889-54ttw\" (UID: \"39a1a56a-4224-411d-95d1-49cf679d773e\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-54ttw" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.279467 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fqfvc" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.282169 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-c65qc"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.283036 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-c65qc" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.287347 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-vssz6" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.289379 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-c65qc"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.320807 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/597a3429-85c5-4e47-983e-c77f2ccc22d3-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh\" (UID: \"597a3429-85c5-4e47-983e-c77f2ccc22d3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.320846 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwtsp\" (UniqueName: \"kubernetes.io/projected/597a3429-85c5-4e47-983e-c77f2ccc22d3-kube-api-access-wwtsp\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh\" (UID: \"597a3429-85c5-4e47-983e-c77f2ccc22d3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.320889 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmhx9\" (UniqueName: \"kubernetes.io/projected/d4bd9854-7d67-4698-a00a-61ea442cc25b-kube-api-access-jmhx9\") pod \"placement-operator-controller-manager-5b964cf4cd-g7844\" (UID: \"d4bd9854-7d67-4698-a00a-61ea442cc25b\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-g7844" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.320958 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmvbt\" (UniqueName: \"kubernetes.io/projected/7cc4d222-cdf7-4f68-b8b0-eb9a01b0cc12-kube-api-access-tmvbt\") pod \"octavia-operator-controller-manager-6687f8d877-6lqdn\" (UID: \"7cc4d222-cdf7-4f68-b8b0-eb9a01b0cc12\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-6lqdn" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.320982 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls2qv\" (UniqueName: \"kubernetes.io/projected/a56bcd98-4612-4d2a-b172-78d775c10b6a-kube-api-access-ls2qv\") pod \"nova-operator-controller-manager-55bff696bd-vn95k\" (UID: \"a56bcd98-4612-4d2a-b172-78d775c10b6a\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vn95k" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.321001 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45xpb\" (UniqueName: \"kubernetes.io/projected/07b45e70-e382-4778-8d7e-f360ab63dcf9-kube-api-access-45xpb\") pod \"ovn-operator-controller-manager-788c46999f-mljht\" (UID: \"07b45e70-e382-4778-8d7e-f360ab63dcf9\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mljht" Feb 01 14:35:06 crc kubenswrapper[4820]: E0201 14:35:06.321727 4820 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 01 14:35:06 crc 
kubenswrapper[4820]: E0201 14:35:06.321853 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/597a3429-85c5-4e47-983e-c77f2ccc22d3-cert podName:597a3429-85c5-4e47-983e-c77f2ccc22d3 nodeName:}" failed. No retries permitted until 2026-02-01 14:35:06.821830415 +0000 UTC m=+848.342196699 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/597a3429-85c5-4e47-983e-c77f2ccc22d3-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh" (UID: "597a3429-85c5-4e47-983e-c77f2ccc22d3") : secret "openstack-baremetal-operator-webhook-server-cert" not found
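The two E-level records above are one failure, not two: secret.go:188 reports that the volume plugin could not fetch the Secret backing the "cert" volume (webhook serving certificates are typically created by OLM or cert-manager shortly after the operator bundle installs, so a window in which the Secret does not yet exist is normal during start-up), and nestedpendingoperations.go:348 then parks the mount with an exponentially growing durationBeforeRetry: 500ms here, 1s when the same volume fails again below. The following is a minimal, self-contained Go sketch of that doubling pattern, an illustration rather than the kubelet's actual code; the 2m2s ceiling is an assumption recalled from the kubelet's goroutinemap backoff and should be verified against the source.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // mountSecretVolume is a hypothetical stand-in for MountVolume.SetUp: it
    // keeps failing for as long as the Secret backing the volume is missing.
    func mountSecretVolume() error {
        return errors.New(`secret "openstack-baremetal-operator-webhook-server-cert" not found`)
    }

    func main() {
        delay := 500 * time.Millisecond            // first durationBeforeRetry in the log
        maxDelay := 2*time.Minute + 2*time.Second  // assumed cap, check kubelet source
        for attempt := 1; attempt <= 6; attempt++ {
            err := mountSecretVolume()
            fmt.Printf("attempt %d: %v; no retries permitted for %v\n", attempt, err, delay)
            delay *= 2 // double on every consecutive failure of the same volume
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

Once the Secret is created, the next retry mounts the volume and the pod proceeds, which is why these records are noisy but normally self-healing.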
Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.329007 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9s78" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.342903 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmhx9\" (UniqueName: \"kubernetes.io/projected/d4bd9854-7d67-4698-a00a-61ea442cc25b-kube-api-access-jmhx9\") pod \"placement-operator-controller-manager-5b964cf4cd-g7844\" (UID: \"d4bd9854-7d67-4698-a00a-61ea442cc25b\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-g7844" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.344336 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmvbt\" (UniqueName: \"kubernetes.io/projected/7cc4d222-cdf7-4f68-b8b0-eb9a01b0cc12-kube-api-access-tmvbt\") pod \"octavia-operator-controller-manager-6687f8d877-6lqdn\" (UID: \"7cc4d222-cdf7-4f68-b8b0-eb9a01b0cc12\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-6lqdn" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.347029 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwtsp\" (UniqueName: \"kubernetes.io/projected/597a3429-85c5-4e47-983e-c77f2ccc22d3-kube-api-access-wwtsp\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh\" (UID: \"597a3429-85c5-4e47-983e-c77f2ccc22d3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.350723 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45xpb\" (UniqueName: \"kubernetes.io/projected/07b45e70-e382-4778-8d7e-f360ab63dcf9-kube-api-access-45xpb\") pod \"ovn-operator-controller-manager-788c46999f-mljht\" (UID: \"07b45e70-e382-4778-8d7e-f360ab63dcf9\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mljht" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.356554 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls2qv\" (UniqueName: \"kubernetes.io/projected/a56bcd98-4612-4d2a-b172-78d775c10b6a-kube-api-access-ls2qv\") pod \"nova-operator-controller-manager-55bff696bd-vn95k\" (UID: \"a56bcd98-4612-4d2a-b172-78d775c10b6a\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vn95k" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.389244 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-x5p2q" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.400706 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-d85699b78-lhgzd" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.403113 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-5dxhd"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.404135 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5dxhd" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.408693 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-zp7xr" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.410769 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-5dxhd"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.416221 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-h5tlb" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.424863 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-4mbst"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.429115 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-4mbst" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.430460 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-54ttw" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.431858 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dv9l\" (UniqueName: \"kubernetes.io/projected/9a9e9d77-ca3a-4cd4-b39a-75f8730f12f6-kube-api-access-4dv9l\") pod \"test-operator-controller-manager-56f8bfcd9f-5dxhd\" (UID: \"9a9e9d77-ca3a-4cd4-b39a-75f8730f12f6\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5dxhd" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.431906 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp2sh\" (UniqueName: \"kubernetes.io/projected/3f3881e5-dfdc-4d18-9d61-78439a69d0cb-kube-api-access-xp2sh\") pod \"swift-operator-controller-manager-68fc8c869-nqvp8\" (UID: \"3f3881e5-dfdc-4d18-9d61-78439a69d0cb\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-nqvp8" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.431995 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62zgb\" (UniqueName: \"kubernetes.io/projected/247ddb96-9b27-41f5-8890-8bf2dd175c36-kube-api-access-62zgb\") pod \"watcher-operator-controller-manager-564965969-4mbst\" (UID: \"247ddb96-9b27-41f5-8890-8bf2dd175c36\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-4mbst" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.432023 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swhp7\" (UniqueName: \"kubernetes.io/projected/2036e257-d3cc-48a2-bfc5-d3a262090c1e-kube-api-access-swhp7\") pod \"telemetry-operator-controller-manager-64b5b76f97-c65qc\" (UID: 
\"2036e257-d3cc-48a2-bfc5-d3a262090c1e\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-c65qc" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.437718 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-sgxzb" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.437795 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-4mbst"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.478132 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vn95k" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.500848 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-6lqdn" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.518846 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.522469 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.524452 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.524620 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.525547 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-dzg47" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.537009 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swhp7\" (UniqueName: \"kubernetes.io/projected/2036e257-d3cc-48a2-bfc5-d3a262090c1e-kube-api-access-swhp7\") pod \"telemetry-operator-controller-manager-64b5b76f97-c65qc\" (UID: \"2036e257-d3cc-48a2-bfc5-d3a262090c1e\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-c65qc" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.537068 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dv9l\" (UniqueName: \"kubernetes.io/projected/9a9e9d77-ca3a-4cd4-b39a-75f8730f12f6-kube-api-access-4dv9l\") pod \"test-operator-controller-manager-56f8bfcd9f-5dxhd\" (UID: \"9a9e9d77-ca3a-4cd4-b39a-75f8730f12f6\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5dxhd" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.537088 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp2sh\" (UniqueName: \"kubernetes.io/projected/3f3881e5-dfdc-4d18-9d61-78439a69d0cb-kube-api-access-xp2sh\") pod \"swift-operator-controller-manager-68fc8c869-nqvp8\" (UID: \"3f3881e5-dfdc-4d18-9d61-78439a69d0cb\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-nqvp8" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.537115 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09ceaf4b-4a63-4ba6-9b77-ac550850ffe4-cert\") pod 
\"infra-operator-controller-manager-79955696d6-tbjvq\" (UID: \"09ceaf4b-4a63-4ba6-9b77-ac550850ffe4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tbjvq" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.537176 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62zgb\" (UniqueName: \"kubernetes.io/projected/247ddb96-9b27-41f5-8890-8bf2dd175c36-kube-api-access-62zgb\") pod \"watcher-operator-controller-manager-564965969-4mbst\" (UID: \"247ddb96-9b27-41f5-8890-8bf2dd175c36\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-4mbst" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.537476 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp"] Feb 01 14:35:06 crc kubenswrapper[4820]: E0201 14:35:06.538390 4820 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 01 14:35:06 crc kubenswrapper[4820]: E0201 14:35:06.538427 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09ceaf4b-4a63-4ba6-9b77-ac550850ffe4-cert podName:09ceaf4b-4a63-4ba6-9b77-ac550850ffe4 nodeName:}" failed. No retries permitted until 2026-02-01 14:35:07.538414191 +0000 UTC m=+849.058780475 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/09ceaf4b-4a63-4ba6-9b77-ac550850ffe4-cert") pod "infra-operator-controller-manager-79955696d6-tbjvq" (UID: "09ceaf4b-4a63-4ba6-9b77-ac550850ffe4") : secret "infra-operator-webhook-server-cert" not found Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.545315 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mljht" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.555341 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp2sh\" (UniqueName: \"kubernetes.io/projected/3f3881e5-dfdc-4d18-9d61-78439a69d0cb-kube-api-access-xp2sh\") pod \"swift-operator-controller-manager-68fc8c869-nqvp8\" (UID: \"3f3881e5-dfdc-4d18-9d61-78439a69d0cb\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-nqvp8" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.555616 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dv9l\" (UniqueName: \"kubernetes.io/projected/9a9e9d77-ca3a-4cd4-b39a-75f8730f12f6-kube-api-access-4dv9l\") pod \"test-operator-controller-manager-56f8bfcd9f-5dxhd\" (UID: \"9a9e9d77-ca3a-4cd4-b39a-75f8730f12f6\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5dxhd" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.561023 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swhp7\" (UniqueName: \"kubernetes.io/projected/2036e257-d3cc-48a2-bfc5-d3a262090c1e-kube-api-access-swhp7\") pod \"telemetry-operator-controller-manager-64b5b76f97-c65qc\" (UID: \"2036e257-d3cc-48a2-bfc5-d3a262090c1e\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-c65qc" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.563399 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-g7844" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.573808 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-nqvp8" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.574192 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62zgb\" (UniqueName: \"kubernetes.io/projected/247ddb96-9b27-41f5-8890-8bf2dd175c36-kube-api-access-62zgb\") pod \"watcher-operator-controller-manager-564965969-4mbst\" (UID: \"247ddb96-9b27-41f5-8890-8bf2dd175c36\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-4mbst" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.608584 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qwxns"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.609434 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qwxns" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.611757 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-6k8qt" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.622043 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qwxns"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.624669 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-c65qc" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.640693 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-metrics-certs\") pod \"openstack-operator-controller-manager-86d788bc79-jdccp\" (UID: \"b1b641e7-c3a8-4f7e-89c7-e362a3080f70\") " pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.641010 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggsq9\" (UniqueName: \"kubernetes.io/projected/c560d3b1-caa9-4125-bc88-cdcdfb7ed651-kube-api-access-ggsq9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qwxns\" (UID: \"c560d3b1-caa9-4125-bc88-cdcdfb7ed651\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qwxns" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.641147 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h59ks\" (UniqueName: \"kubernetes.io/projected/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-kube-api-access-h59ks\") pod \"openstack-operator-controller-manager-86d788bc79-jdccp\" (UID: \"b1b641e7-c3a8-4f7e-89c7-e362a3080f70\") " pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.641683 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-webhook-certs\") pod 
\"openstack-operator-controller-manager-86d788bc79-jdccp\" (UID: \"b1b641e7-c3a8-4f7e-89c7-e362a3080f70\") " pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.685211 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wnzxh"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.745656 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-webhook-certs\") pod \"openstack-operator-controller-manager-86d788bc79-jdccp\" (UID: \"b1b641e7-c3a8-4f7e-89c7-e362a3080f70\") " pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.745741 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-metrics-certs\") pod \"openstack-operator-controller-manager-86d788bc79-jdccp\" (UID: \"b1b641e7-c3a8-4f7e-89c7-e362a3080f70\") " pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.745787 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggsq9\" (UniqueName: \"kubernetes.io/projected/c560d3b1-caa9-4125-bc88-cdcdfb7ed651-kube-api-access-ggsq9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qwxns\" (UID: \"c560d3b1-caa9-4125-bc88-cdcdfb7ed651\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qwxns" Feb 01 14:35:06 crc kubenswrapper[4820]: E0201 14:35:06.745785 4820 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.745803 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h59ks\" (UniqueName: \"kubernetes.io/projected/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-kube-api-access-h59ks\") pod \"openstack-operator-controller-manager-86d788bc79-jdccp\" (UID: \"b1b641e7-c3a8-4f7e-89c7-e362a3080f70\") " pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp" Feb 01 14:35:06 crc kubenswrapper[4820]: E0201 14:35:06.745854 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-webhook-certs podName:b1b641e7-c3a8-4f7e-89c7-e362a3080f70 nodeName:}" failed. No retries permitted until 2026-02-01 14:35:07.245837395 +0000 UTC m=+848.766203679 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-webhook-certs") pod "openstack-operator-controller-manager-86d788bc79-jdccp" (UID: "b1b641e7-c3a8-4f7e-89c7-e362a3080f70") : secret "webhook-server-cert" not found Feb 01 14:35:06 crc kubenswrapper[4820]: E0201 14:35:06.746082 4820 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 01 14:35:06 crc kubenswrapper[4820]: E0201 14:35:06.746121 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-metrics-certs podName:b1b641e7-c3a8-4f7e-89c7-e362a3080f70 nodeName:}" failed. 
No retries permitted until 2026-02-01 14:35:07.246106971 +0000 UTC m=+848.766473255 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-metrics-certs") pod "openstack-operator-controller-manager-86d788bc79-jdccp" (UID: "b1b641e7-c3a8-4f7e-89c7-e362a3080f70") : secret "metrics-server-cert" not found Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.749853 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5dxhd" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.767597 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggsq9\" (UniqueName: \"kubernetes.io/projected/c560d3b1-caa9-4125-bc88-cdcdfb7ed651-kube-api-access-ggsq9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qwxns\" (UID: \"c560d3b1-caa9-4125-bc88-cdcdfb7ed651\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qwxns" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.771749 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-4mbst" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.772336 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-9fz5b"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.777143 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h59ks\" (UniqueName: \"kubernetes.io/projected/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-kube-api-access-h59ks\") pod \"openstack-operator-controller-manager-86d788bc79-jdccp\" (UID: \"b1b641e7-c3a8-4f7e-89c7-e362a3080f70\") " pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp" Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.846428 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/597a3429-85c5-4e47-983e-c77f2ccc22d3-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh\" (UID: \"597a3429-85c5-4e47-983e-c77f2ccc22d3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh" Feb 01 14:35:06 crc kubenswrapper[4820]: E0201 14:35:06.846624 4820 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 01 14:35:06 crc kubenswrapper[4820]: E0201 14:35:06.846666 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/597a3429-85c5-4e47-983e-c77f2ccc22d3-cert podName:597a3429-85c5-4e47-983e-c77f2ccc22d3 nodeName:}" failed. No retries permitted until 2026-02-01 14:35:07.846651906 +0000 UTC m=+849.367018190 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/597a3429-85c5-4e47-983e-c77f2ccc22d3-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh" (UID: "597a3429-85c5-4e47-983e-c77f2ccc22d3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.856668 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-rf496"] Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.864777 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-slbdv"] Feb 01 14:35:06 crc kubenswrapper[4820]: W0201 14:35:06.870553 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90739996_10ed_4ad6_b704_012fafc3505f.slice/crio-5d3db3aeae06fc39dc25550de30948e215275666b14ce6ab8fc04ac1209d7f19 WatchSource:0}: Error finding container 5d3db3aeae06fc39dc25550de30948e215275666b14ce6ab8fc04ac1209d7f19: Status 404 returned error can't find the container with id 5d3db3aeae06fc39dc25550de30948e215275666b14ce6ab8fc04ac1209d7f19 Feb 01 14:35:06 crc kubenswrapper[4820]: I0201 14:35:06.930362 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qwxns" Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.033997 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-h9f62"] Feb 01 14:35:07 crc kubenswrapper[4820]: W0201 14:35:07.042726 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod752637ba_3ba3_4b87_89f4_d2a079743b70.slice/crio-b47c699497031211d72be69349507eeaf993a5309c71642ee11f2d1080c72d35 WatchSource:0}: Error finding container b47c699497031211d72be69349507eeaf993a5309c71642ee11f2d1080c72d35: Status 404 returned error can't find the container with id b47c699497031211d72be69349507eeaf993a5309c71642ee11f2d1080c72d35 Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.049677 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-fqfvc"] Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.174236 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9s78"] Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.213949 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-x5p2q"] Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.213976 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-54ttw"] Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.261493 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-webhook-certs\") pod \"openstack-operator-controller-manager-86d788bc79-jdccp\" (UID: \"b1b641e7-c3a8-4f7e-89c7-e362a3080f70\") " pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp" Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.261597 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-metrics-certs\") pod \"openstack-operator-controller-manager-86d788bc79-jdccp\" (UID: \"b1b641e7-c3a8-4f7e-89c7-e362a3080f70\") " pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp" Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.262166 4820 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.262260 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-metrics-certs podName:b1b641e7-c3a8-4f7e-89c7-e362a3080f70 nodeName:}" failed. No retries permitted until 2026-02-01 14:35:08.262231391 +0000 UTC m=+849.782597765 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-metrics-certs") pod "openstack-operator-controller-manager-86d788bc79-jdccp" (UID: "b1b641e7-c3a8-4f7e-89c7-e362a3080f70") : secret "metrics-server-cert" not found Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.262661 4820 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.262699 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-webhook-certs podName:b1b641e7-c3a8-4f7e-89c7-e362a3080f70 nodeName:}" failed. No retries permitted until 2026-02-01 14:35:08.262686482 +0000 UTC m=+849.783052866 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-webhook-certs") pod "openstack-operator-controller-manager-86d788bc79-jdccp" (UID: "b1b641e7-c3a8-4f7e-89c7-e362a3080f70") : secret "webhook-server-cert" not found Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.371553 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-nqvp8"] Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.382085 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-vn95k"] Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.391192 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.219:5001/openstack-k8s-operators/manila-operator:e7306fdc5fd2a5c16ba0ccc6a65276e04b9a708e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l66lw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-d85699b78-lhgzd_openstack-operators(e5493c34-468b-4636-b236-9b5cb6e95de1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 01 14:35:07 crc kubenswrapper[4820]: W0201 14:35:07.392147 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cc4d222_cdf7_4f68_b8b0_eb9a01b0cc12.slice/crio-557ea4115aeffc40c78af5ef7c0615075ec78a2176fb712fe95a23d77926b355 WatchSource:0}: Error finding container 557ea4115aeffc40c78af5ef7c0615075ec78a2176fb712fe95a23d77926b355: Status 404 returned error can't find the container with id 557ea4115aeffc40c78af5ef7c0615075ec78a2176fb712fe95a23d77926b355 Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.393481 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-d85699b78-lhgzd" podUID="e5493c34-468b-4636-b236-9b5cb6e95de1"
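The ErrImagePull failures that start here are not registry or network faults: "pull QPS exceeded" is the kubelet's own client-side throttle. Image pulls pass through a token-bucket rate limiter sized by the kubelet configuration fields registryPullQPS and registryBurst (commonly 5 and 10; treat those numbers as assumptions and confirm them against this node's kubelet config), so when this many operator deployments land on a single CRC node at once the bucket drains and the surplus StartContainer attempts fail fast and are requeued. A hedged sketch of that gating, written against golang.org/x/time/rate rather than the kubelet's internal flowcontrol wrapper:

    package main

    import (
        "fmt"

        "golang.org/x/time/rate"
    )

    func main() {
        // Assumed kubelet defaults: registryPullQPS=5, registryBurst=10.
        limiter := rate.NewLimiter(rate.Limit(5), 10)

        // Twenty pods asking to pull an image in the same instant: the burst
        // (10 tokens) is consumed almost immediately and the rest are rejected,
        // exactly like the "pull QPS exceeded" records in this log, then
        // retried on a later pod sync.
        for pod := 1; pod <= 20; pod++ {
            if limiter.Allow() {
                fmt.Printf("pod %02d: pull admitted\n", pod)
            } else {
                fmt.Printf("pod %02d: pull QPS exceeded (requeued)\n", pod)
            }
        }
    }

Setting registryPullQPS to 0 removes the limit at the cost of a thundering herd against the registry; left alone, the throttled pods are expected to pull successfully on a later sync loop.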
Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.395306 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-g7844"] Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.396702 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tmvbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-6lqdn_openstack-operators(7cc4d222-cdf7-4f68-b8b0-eb9a01b0cc12): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.398284 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-6lqdn" podUID="7cc4d222-cdf7-4f68-b8b0-eb9a01b0cc12" Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.404012 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-h5tlb"] Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.411001 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-d85699b78-lhgzd"] Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.417303 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-6lqdn"] Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.524220 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-5dxhd"] Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.524992 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4dv9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-5dxhd_openstack-operators(9a9e9d77-ca3a-4cd4-b39a-75f8730f12f6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.525081 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-45xpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-mljht_openstack-operators(07b45e70-e382-4778-8d7e-f360ab63dcf9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.526207 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5dxhd" podUID="9a9e9d77-ca3a-4cd4-b39a-75f8730f12f6" Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.526267 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mljht" podUID="07b45e70-e382-4778-8d7e-f360ab63dcf9" Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.527562 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-swhp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-c65qc_openstack-operators(2036e257-d3cc-48a2-bfc5-d3a262090c1e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.528664 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-c65qc" podUID="2036e257-d3cc-48a2-bfc5-d3a262090c1e" Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.531548 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ggsq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-qwxns_openstack-operators(c560d3b1-caa9-4125-bc88-cdcdfb7ed651): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.532486 
4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-62zgb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-4mbst_openstack-operators(247ddb96-9b27-41f5-8890-8bf2dd175c36): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.532742 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qwxns" podUID="c560d3b1-caa9-4125-bc88-cdcdfb7ed651" Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.534618 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-4mbst" podUID="247ddb96-9b27-41f5-8890-8bf2dd175c36" Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.535862 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-mljht"] Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.541722 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-c65qc"] Feb 01 
14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.547915 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-4mbst"] Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.552743 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qwxns"] Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.572154 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09ceaf4b-4a63-4ba6-9b77-ac550850ffe4-cert\") pod \"infra-operator-controller-manager-79955696d6-tbjvq\" (UID: \"09ceaf4b-4a63-4ba6-9b77-ac550850ffe4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tbjvq" Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.572331 4820 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.572403 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09ceaf4b-4a63-4ba6-9b77-ac550850ffe4-cert podName:09ceaf4b-4a63-4ba6-9b77-ac550850ffe4 nodeName:}" failed. No retries permitted until 2026-02-01 14:35:09.572384692 +0000 UTC m=+851.092750976 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/09ceaf4b-4a63-4ba6-9b77-ac550850ffe4-cert") pod "infra-operator-controller-manager-79955696d6-tbjvq" (UID: "09ceaf4b-4a63-4ba6-9b77-ac550850ffe4") : secret "infra-operator-webhook-server-cert" not found Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.622091 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-d85699b78-lhgzd" event={"ID":"e5493c34-468b-4636-b236-9b5cb6e95de1","Type":"ContainerStarted","Data":"9128ff394fba32bbb291eb41e7a4b68ab397d83e165d9c085be8e21dad1b13b8"} Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.623452 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-54ttw" event={"ID":"39a1a56a-4224-411d-95d1-49cf679d773e","Type":"ContainerStarted","Data":"2c5fc15888078ead09bd2849842feb245d7304e524fc40de25f8a107758003f5"} Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.623558 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.219:5001/openstack-k8s-operators/manila-operator:e7306fdc5fd2a5c16ba0ccc6a65276e04b9a708e\\\"\"" pod="openstack-operators/manila-operator-controller-manager-d85699b78-lhgzd" podUID="e5493c34-468b-4636-b236-9b5cb6e95de1" Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.624862 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wnzxh" event={"ID":"52e7e427-3656-4c7a-afbb-a7cbd89d9318","Type":"ContainerStarted","Data":"543784991ace99e44ce4beae063b096bc66936480446775694f6b55589e2e718"} Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.626431 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-6lqdn" event={"ID":"7cc4d222-cdf7-4f68-b8b0-eb9a01b0cc12","Type":"ContainerStarted","Data":"557ea4115aeffc40c78af5ef7c0615075ec78a2176fb712fe95a23d77926b355"} Feb 01 14:35:07 crc 
kubenswrapper[4820]: E0201 14:35:07.628411 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-6lqdn" podUID="7cc4d222-cdf7-4f68-b8b0-eb9a01b0cc12" Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.631992 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-h9f62" event={"ID":"752637ba-3ba3-4b87-89f4-d2a079743b70","Type":"ContainerStarted","Data":"b47c699497031211d72be69349507eeaf993a5309c71642ee11f2d1080c72d35"} Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.633749 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-h5tlb" event={"ID":"7524b653-9c87-474e-b819-ebdd2864815c","Type":"ContainerStarted","Data":"1bbf12bfcc963126d5fb2c8f59ac5221c9697683bc327aac7fb3d464119f595b"} Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.636724 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rf496" event={"ID":"90739996-10ed-4ad6-b704-012fafc3505f","Type":"ContainerStarted","Data":"5d3db3aeae06fc39dc25550de30948e215275666b14ce6ab8fc04ac1209d7f19"} Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.638723 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-4mbst" event={"ID":"247ddb96-9b27-41f5-8890-8bf2dd175c36","Type":"ContainerStarted","Data":"98aa2bfbf8243a10d9571e1b1a925b151fdb5e0ee9184b0afbd4ba81a85498a0"} Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.641793 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-4mbst" podUID="247ddb96-9b27-41f5-8890-8bf2dd175c36" Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.642995 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fqfvc" event={"ID":"670d53e4-21ca-4ec7-b72b-0e938b2d85e8","Type":"ContainerStarted","Data":"9f31ee102595a6a5640024ecf77f7f83300c996ed8ae56fdbb7f4decb34a4b0c"} Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.644504 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9fz5b" event={"ID":"4a8d372d-6d1c-4366-983c-947ccd5403e6","Type":"ContainerStarted","Data":"c6c1c3be36e402dbd4b3a2798c25d8b4bbf2f7fc196872be5f70c51a03f1894d"} Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.646443 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mljht" event={"ID":"07b45e70-e382-4778-8d7e-f360ab63dcf9","Type":"ContainerStarted","Data":"8cce0d6af9b247675323acbc8b9cbecb7846389094ddc26b33e0f5e3e965e336"} Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.647380 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9s78" event={"ID":"ccd8a397-de84-44f7-ae90-f0e7a51f2d80","Type":"ContainerStarted","Data":"d66bc6708f0d9b6ec084ddd0db1795e66bdfe36456d7079d8e36a9a2026280d2"} Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.648329 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mljht" podUID="07b45e70-e382-4778-8d7e-f360ab63dcf9" Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.651808 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vn95k" event={"ID":"a56bcd98-4612-4d2a-b172-78d775c10b6a","Type":"ContainerStarted","Data":"46b8f007f47f55e41c531178b5b07bd28e18ccb9d0d404831dc2b56c3f7432ab"} Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.656085 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-slbdv" event={"ID":"92bf88c4-d908-4d13-be4e-72936688113c","Type":"ContainerStarted","Data":"04f0cbf42b5896b3fbbc94c965b1e3fd5fe2410be169d18f9363ded86a64c940"} Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.657484 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-x5p2q" event={"ID":"abe5d51e-a818-4e88-93d9-4f53a8d368b4","Type":"ContainerStarted","Data":"d5f9f1d3f20409f510069ff0984123508d4a350af1df3932cbbb7abc792df20c"} Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.661622 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-c65qc" event={"ID":"2036e257-d3cc-48a2-bfc5-d3a262090c1e","Type":"ContainerStarted","Data":"fdd9b1cd07f4ccc42912baed3dfb42a0807e863ebc42a8d5cc561ad8dafd0d67"} Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.663362 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5dxhd" event={"ID":"9a9e9d77-ca3a-4cd4-b39a-75f8730f12f6","Type":"ContainerStarted","Data":"21193a2f8f026e9037370110e1b9067a3b1c86f843193e584d4a04916f23584a"} Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.663396 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-c65qc" podUID="2036e257-d3cc-48a2-bfc5-d3a262090c1e" Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.671110 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5dxhd" podUID="9a9e9d77-ca3a-4cd4-b39a-75f8730f12f6" Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.674209 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qwxns" event={"ID":"c560d3b1-caa9-4125-bc88-cdcdfb7ed651","Type":"ContainerStarted","Data":"7cd49fb776ca0735d00e2dcf7f4996f80a5ca0e1b33082b78f206bfa3669a2d0"} Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.677552 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qwxns" podUID="c560d3b1-caa9-4125-bc88-cdcdfb7ed651" Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.678533 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-nqvp8" event={"ID":"3f3881e5-dfdc-4d18-9d61-78439a69d0cb","Type":"ContainerStarted","Data":"b8b2d810bc91d0ec5bafd07219378b16668c05bb5eb97779eb4526be8dea824f"} Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.681137 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-g7844" event={"ID":"d4bd9854-7d67-4698-a00a-61ea442cc25b","Type":"ContainerStarted","Data":"d5451ba93b5d1203702e0e627bd4374179c60bdc4eecd7126cd1f03933cb4501"} Feb 01 14:35:07 crc kubenswrapper[4820]: I0201 14:35:07.876164 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/597a3429-85c5-4e47-983e-c77f2ccc22d3-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh\" (UID: \"597a3429-85c5-4e47-983e-c77f2ccc22d3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh" Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.876325 4820 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 01 14:35:07 crc kubenswrapper[4820]: E0201 14:35:07.876624 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/597a3429-85c5-4e47-983e-c77f2ccc22d3-cert podName:597a3429-85c5-4e47-983e-c77f2ccc22d3 nodeName:}" failed. No retries permitted until 2026-02-01 14:35:09.87660598 +0000 UTC m=+851.396972264 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/597a3429-85c5-4e47-983e-c77f2ccc22d3-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh" (UID: "597a3429-85c5-4e47-983e-c77f2ccc22d3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 01 14:35:08 crc kubenswrapper[4820]: I0201 14:35:08.282907 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-webhook-certs\") pod \"openstack-operator-controller-manager-86d788bc79-jdccp\" (UID: \"b1b641e7-c3a8-4f7e-89c7-e362a3080f70\") " pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp" Feb 01 14:35:08 crc kubenswrapper[4820]: I0201 14:35:08.283058 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-metrics-certs\") pod \"openstack-operator-controller-manager-86d788bc79-jdccp\" (UID: \"b1b641e7-c3a8-4f7e-89c7-e362a3080f70\") " pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp" Feb 01 14:35:08 crc kubenswrapper[4820]: E0201 14:35:08.284218 4820 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 01 14:35:08 crc kubenswrapper[4820]: E0201 14:35:08.284283 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-webhook-certs podName:b1b641e7-c3a8-4f7e-89c7-e362a3080f70 nodeName:}" failed. No retries permitted until 2026-02-01 14:35:10.284268763 +0000 UTC m=+851.804635047 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-webhook-certs") pod "openstack-operator-controller-manager-86d788bc79-jdccp" (UID: "b1b641e7-c3a8-4f7e-89c7-e362a3080f70") : secret "webhook-server-cert" not found Feb 01 14:35:08 crc kubenswrapper[4820]: E0201 14:35:08.285057 4820 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 01 14:35:08 crc kubenswrapper[4820]: E0201 14:35:08.285155 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-metrics-certs podName:b1b641e7-c3a8-4f7e-89c7-e362a3080f70 nodeName:}" failed. No retries permitted until 2026-02-01 14:35:10.285135473 +0000 UTC m=+851.805501807 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-metrics-certs") pod "openstack-operator-controller-manager-86d788bc79-jdccp" (UID: "b1b641e7-c3a8-4f7e-89c7-e362a3080f70") : secret "metrics-server-cert" not found Feb 01 14:35:08 crc kubenswrapper[4820]: E0201 14:35:08.692201 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mljht" podUID="07b45e70-e382-4778-8d7e-f360ab63dcf9" Feb 01 14:35:08 crc kubenswrapper[4820]: E0201 14:35:08.692404 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-c65qc" podUID="2036e257-d3cc-48a2-bfc5-d3a262090c1e" Feb 01 14:35:08 crc kubenswrapper[4820]: E0201 14:35:08.692451 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-4mbst" podUID="247ddb96-9b27-41f5-8890-8bf2dd175c36" Feb 01 14:35:08 crc kubenswrapper[4820]: E0201 14:35:08.692971 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-6lqdn" podUID="7cc4d222-cdf7-4f68-b8b0-eb9a01b0cc12" Feb 01 14:35:08 crc kubenswrapper[4820]: E0201 14:35:08.693603 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5dxhd" podUID="9a9e9d77-ca3a-4cd4-b39a-75f8730f12f6" Feb 01 14:35:08 crc kubenswrapper[4820]: E0201 14:35:08.695008 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qwxns" podUID="c560d3b1-caa9-4125-bc88-cdcdfb7ed651" Feb 01 14:35:08 crc kubenswrapper[4820]: E0201 14:35:08.698031 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.219:5001/openstack-k8s-operators/manila-operator:e7306fdc5fd2a5c16ba0ccc6a65276e04b9a708e\\\"\"" pod="openstack-operators/manila-operator-controller-manager-d85699b78-lhgzd" 
podUID="e5493c34-468b-4636-b236-9b5cb6e95de1" Feb 01 14:35:09 crc kubenswrapper[4820]: I0201 14:35:09.600298 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09ceaf4b-4a63-4ba6-9b77-ac550850ffe4-cert\") pod \"infra-operator-controller-manager-79955696d6-tbjvq\" (UID: \"09ceaf4b-4a63-4ba6-9b77-ac550850ffe4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tbjvq" Feb 01 14:35:09 crc kubenswrapper[4820]: E0201 14:35:09.600458 4820 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 01 14:35:09 crc kubenswrapper[4820]: E0201 14:35:09.600741 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09ceaf4b-4a63-4ba6-9b77-ac550850ffe4-cert podName:09ceaf4b-4a63-4ba6-9b77-ac550850ffe4 nodeName:}" failed. No retries permitted until 2026-02-01 14:35:13.600726483 +0000 UTC m=+855.121092767 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/09ceaf4b-4a63-4ba6-9b77-ac550850ffe4-cert") pod "infra-operator-controller-manager-79955696d6-tbjvq" (UID: "09ceaf4b-4a63-4ba6-9b77-ac550850ffe4") : secret "infra-operator-webhook-server-cert" not found Feb 01 14:35:09 crc kubenswrapper[4820]: I0201 14:35:09.903708 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/597a3429-85c5-4e47-983e-c77f2ccc22d3-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh\" (UID: \"597a3429-85c5-4e47-983e-c77f2ccc22d3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh" Feb 01 14:35:09 crc kubenswrapper[4820]: E0201 14:35:09.903970 4820 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 01 14:35:09 crc kubenswrapper[4820]: E0201 14:35:09.904069 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/597a3429-85c5-4e47-983e-c77f2ccc22d3-cert podName:597a3429-85c5-4e47-983e-c77f2ccc22d3 nodeName:}" failed. No retries permitted until 2026-02-01 14:35:13.904044429 +0000 UTC m=+855.424410713 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/597a3429-85c5-4e47-983e-c77f2ccc22d3-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh" (UID: "597a3429-85c5-4e47-983e-c77f2ccc22d3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 01 14:35:10 crc kubenswrapper[4820]: I0201 14:35:10.309348 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-webhook-certs\") pod \"openstack-operator-controller-manager-86d788bc79-jdccp\" (UID: \"b1b641e7-c3a8-4f7e-89c7-e362a3080f70\") " pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp" Feb 01 14:35:10 crc kubenswrapper[4820]: I0201 14:35:10.309443 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-metrics-certs\") pod \"openstack-operator-controller-manager-86d788bc79-jdccp\" (UID: \"b1b641e7-c3a8-4f7e-89c7-e362a3080f70\") " pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp" Feb 01 14:35:10 crc kubenswrapper[4820]: E0201 14:35:10.309553 4820 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 01 14:35:10 crc kubenswrapper[4820]: E0201 14:35:10.309598 4820 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 01 14:35:10 crc kubenswrapper[4820]: E0201 14:35:10.309649 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-webhook-certs podName:b1b641e7-c3a8-4f7e-89c7-e362a3080f70 nodeName:}" failed. No retries permitted until 2026-02-01 14:35:14.30963169 +0000 UTC m=+855.829997974 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-webhook-certs") pod "openstack-operator-controller-manager-86d788bc79-jdccp" (UID: "b1b641e7-c3a8-4f7e-89c7-e362a3080f70") : secret "webhook-server-cert" not found Feb 01 14:35:10 crc kubenswrapper[4820]: E0201 14:35:10.309667 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-metrics-certs podName:b1b641e7-c3a8-4f7e-89c7-e362a3080f70 nodeName:}" failed. No retries permitted until 2026-02-01 14:35:14.309660911 +0000 UTC m=+855.830027195 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-metrics-certs") pod "openstack-operator-controller-manager-86d788bc79-jdccp" (UID: "b1b641e7-c3a8-4f7e-89c7-e362a3080f70") : secret "metrics-server-cert" not found Feb 01 14:35:13 crc kubenswrapper[4820]: I0201 14:35:13.657509 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09ceaf4b-4a63-4ba6-9b77-ac550850ffe4-cert\") pod \"infra-operator-controller-manager-79955696d6-tbjvq\" (UID: \"09ceaf4b-4a63-4ba6-9b77-ac550850ffe4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tbjvq" Feb 01 14:35:13 crc kubenswrapper[4820]: E0201 14:35:13.657739 4820 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 01 14:35:13 crc kubenswrapper[4820]: E0201 14:35:13.658223 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09ceaf4b-4a63-4ba6-9b77-ac550850ffe4-cert podName:09ceaf4b-4a63-4ba6-9b77-ac550850ffe4 nodeName:}" failed. No retries permitted until 2026-02-01 14:35:21.658192493 +0000 UTC m=+863.178558807 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/09ceaf4b-4a63-4ba6-9b77-ac550850ffe4-cert") pod "infra-operator-controller-manager-79955696d6-tbjvq" (UID: "09ceaf4b-4a63-4ba6-9b77-ac550850ffe4") : secret "infra-operator-webhook-server-cert" not found Feb 01 14:35:13 crc kubenswrapper[4820]: I0201 14:35:13.982987 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/597a3429-85c5-4e47-983e-c77f2ccc22d3-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh\" (UID: \"597a3429-85c5-4e47-983e-c77f2ccc22d3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh" Feb 01 14:35:13 crc kubenswrapper[4820]: E0201 14:35:13.983205 4820 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 01 14:35:13 crc kubenswrapper[4820]: E0201 14:35:13.983291 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/597a3429-85c5-4e47-983e-c77f2ccc22d3-cert podName:597a3429-85c5-4e47-983e-c77f2ccc22d3 nodeName:}" failed. No retries permitted until 2026-02-01 14:35:21.983267267 +0000 UTC m=+863.503633621 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/597a3429-85c5-4e47-983e-c77f2ccc22d3-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh" (UID: "597a3429-85c5-4e47-983e-c77f2ccc22d3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 01 14:35:14 crc kubenswrapper[4820]: I0201 14:35:14.388126 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-metrics-certs\") pod \"openstack-operator-controller-manager-86d788bc79-jdccp\" (UID: \"b1b641e7-c3a8-4f7e-89c7-e362a3080f70\") " pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp" Feb 01 14:35:14 crc kubenswrapper[4820]: E0201 14:35:14.388234 4820 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 01 14:35:14 crc kubenswrapper[4820]: E0201 14:35:14.388542 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-metrics-certs podName:b1b641e7-c3a8-4f7e-89c7-e362a3080f70 nodeName:}" failed. No retries permitted until 2026-02-01 14:35:22.388526452 +0000 UTC m=+863.908892736 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-metrics-certs") pod "openstack-operator-controller-manager-86d788bc79-jdccp" (UID: "b1b641e7-c3a8-4f7e-89c7-e362a3080f70") : secret "metrics-server-cert" not found Feb 01 14:35:14 crc kubenswrapper[4820]: I0201 14:35:14.388668 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-webhook-certs\") pod \"openstack-operator-controller-manager-86d788bc79-jdccp\" (UID: \"b1b641e7-c3a8-4f7e-89c7-e362a3080f70\") " pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp" Feb 01 14:35:14 crc kubenswrapper[4820]: E0201 14:35:14.389072 4820 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 01 14:35:14 crc kubenswrapper[4820]: E0201 14:35:14.389143 4820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-webhook-certs podName:b1b641e7-c3a8-4f7e-89c7-e362a3080f70 nodeName:}" failed. No retries permitted until 2026-02-01 14:35:22.389123626 +0000 UTC m=+863.909489990 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-webhook-certs") pod "openstack-operator-controller-manager-86d788bc79-jdccp" (UID: "b1b641e7-c3a8-4f7e-89c7-e362a3080f70") : secret "webhook-server-cert" not found Feb 01 14:35:16 crc kubenswrapper[4820]: I0201 14:35:16.257271 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cjxjw"] Feb 01 14:35:16 crc kubenswrapper[4820]: I0201 14:35:16.259762 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cjxjw" Feb 01 14:35:16 crc kubenswrapper[4820]: I0201 14:35:16.270137 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cjxjw"] Feb 01 14:35:16 crc kubenswrapper[4820]: I0201 14:35:16.418055 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x898p\" (UniqueName: \"kubernetes.io/projected/6972bdb0-9870-475e-acc5-2da4ace45e58-kube-api-access-x898p\") pod \"certified-operators-cjxjw\" (UID: \"6972bdb0-9870-475e-acc5-2da4ace45e58\") " pod="openshift-marketplace/certified-operators-cjxjw" Feb 01 14:35:16 crc kubenswrapper[4820]: I0201 14:35:16.418159 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6972bdb0-9870-475e-acc5-2da4ace45e58-catalog-content\") pod \"certified-operators-cjxjw\" (UID: \"6972bdb0-9870-475e-acc5-2da4ace45e58\") " pod="openshift-marketplace/certified-operators-cjxjw" Feb 01 14:35:16 crc kubenswrapper[4820]: I0201 14:35:16.418240 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6972bdb0-9870-475e-acc5-2da4ace45e58-utilities\") pod \"certified-operators-cjxjw\" (UID: \"6972bdb0-9870-475e-acc5-2da4ace45e58\") " pod="openshift-marketplace/certified-operators-cjxjw" Feb 01 14:35:16 crc kubenswrapper[4820]: I0201 14:35:16.519476 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x898p\" (UniqueName: \"kubernetes.io/projected/6972bdb0-9870-475e-acc5-2da4ace45e58-kube-api-access-x898p\") pod \"certified-operators-cjxjw\" (UID: \"6972bdb0-9870-475e-acc5-2da4ace45e58\") " pod="openshift-marketplace/certified-operators-cjxjw" Feb 01 14:35:16 crc kubenswrapper[4820]: I0201 14:35:16.519536 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6972bdb0-9870-475e-acc5-2da4ace45e58-catalog-content\") pod \"certified-operators-cjxjw\" (UID: \"6972bdb0-9870-475e-acc5-2da4ace45e58\") " pod="openshift-marketplace/certified-operators-cjxjw" Feb 01 14:35:16 crc kubenswrapper[4820]: I0201 14:35:16.519600 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6972bdb0-9870-475e-acc5-2da4ace45e58-utilities\") pod \"certified-operators-cjxjw\" (UID: \"6972bdb0-9870-475e-acc5-2da4ace45e58\") " pod="openshift-marketplace/certified-operators-cjxjw" Feb 01 14:35:16 crc kubenswrapper[4820]: I0201 14:35:16.520023 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6972bdb0-9870-475e-acc5-2da4ace45e58-utilities\") pod \"certified-operators-cjxjw\" (UID: \"6972bdb0-9870-475e-acc5-2da4ace45e58\") " pod="openshift-marketplace/certified-operators-cjxjw" Feb 01 14:35:16 crc kubenswrapper[4820]: I0201 14:35:16.520189 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6972bdb0-9870-475e-acc5-2da4ace45e58-catalog-content\") pod \"certified-operators-cjxjw\" (UID: \"6972bdb0-9870-475e-acc5-2da4ace45e58\") " pod="openshift-marketplace/certified-operators-cjxjw" Feb 01 14:35:16 crc kubenswrapper[4820]: I0201 14:35:16.542708 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x898p\" (UniqueName: \"kubernetes.io/projected/6972bdb0-9870-475e-acc5-2da4ace45e58-kube-api-access-x898p\") pod \"certified-operators-cjxjw\" (UID: \"6972bdb0-9870-475e-acc5-2da4ace45e58\") " pod="openshift-marketplace/certified-operators-cjxjw" Feb 01 14:35:16 crc kubenswrapper[4820]: I0201 14:35:16.612702 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cjxjw" Feb 01 14:35:20 crc kubenswrapper[4820]: E0201 14:35:20.491452 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382" Feb 01 14:35:20 crc kubenswrapper[4820]: E0201 14:35:20.492115 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xp2sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-nqvp8_openstack-operators(3f3881e5-dfdc-4d18-9d61-78439a69d0cb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 01 14:35:20 crc kubenswrapper[4820]: E0201 14:35:20.493288 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-nqvp8" podUID="3f3881e5-dfdc-4d18-9d61-78439a69d0cb" Feb 01 14:35:20 crc kubenswrapper[4820]: E0201 14:35:20.779970 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-nqvp8" podUID="3f3881e5-dfdc-4d18-9d61-78439a69d0cb" Feb 01 14:35:21 crc kubenswrapper[4820]: E0201 14:35:21.011285 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Feb 01 14:35:21 crc kubenswrapper[4820]: E0201 14:35:21.011457 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lrmfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-x5p2q_openstack-operators(abe5d51e-a818-4e88-93d9-4f53a8d368b4): ErrImagePull: rpc error: 
code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 01 14:35:21 crc kubenswrapper[4820]: E0201 14:35:21.012832 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-x5p2q" podUID="abe5d51e-a818-4e88-93d9-4f53a8d368b4" Feb 01 14:35:21 crc kubenswrapper[4820]: E0201 14:35:21.533215 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Feb 01 14:35:21 crc kubenswrapper[4820]: E0201 14:35:21.533768 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2pbhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-h5tlb_openstack-operators(7524b653-9c87-474e-b819-ebdd2864815c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 01 14:35:21 crc kubenswrapper[4820]: E0201 14:35:21.535468 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" 
with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-h5tlb" podUID="7524b653-9c87-474e-b819-ebdd2864815c" Feb 01 14:35:21 crc kubenswrapper[4820]: I0201 14:35:21.686950 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09ceaf4b-4a63-4ba6-9b77-ac550850ffe4-cert\") pod \"infra-operator-controller-manager-79955696d6-tbjvq\" (UID: \"09ceaf4b-4a63-4ba6-9b77-ac550850ffe4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tbjvq" Feb 01 14:35:21 crc kubenswrapper[4820]: I0201 14:35:21.693334 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09ceaf4b-4a63-4ba6-9b77-ac550850ffe4-cert\") pod \"infra-operator-controller-manager-79955696d6-tbjvq\" (UID: \"09ceaf4b-4a63-4ba6-9b77-ac550850ffe4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tbjvq" Feb 01 14:35:21 crc kubenswrapper[4820]: E0201 14:35:21.788209 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-h5tlb" podUID="7524b653-9c87-474e-b819-ebdd2864815c" Feb 01 14:35:21 crc kubenswrapper[4820]: E0201 14:35:21.788245 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-x5p2q" podUID="abe5d51e-a818-4e88-93d9-4f53a8d368b4" Feb 01 14:35:21 crc kubenswrapper[4820]: I0201 14:35:21.906082 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tbjvq" Feb 01 14:35:21 crc kubenswrapper[4820]: I0201 14:35:21.990553 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/597a3429-85c5-4e47-983e-c77f2ccc22d3-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh\" (UID: \"597a3429-85c5-4e47-983e-c77f2ccc22d3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh" Feb 01 14:35:21 crc kubenswrapper[4820]: I0201 14:35:21.996302 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/597a3429-85c5-4e47-983e-c77f2ccc22d3-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh\" (UID: \"597a3429-85c5-4e47-983e-c77f2ccc22d3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh" Feb 01 14:35:22 crc kubenswrapper[4820]: E0201 14:35:22.090752 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Feb 01 14:35:22 crc kubenswrapper[4820]: E0201 14:35:22.091050 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ls2qv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-vn95k_openstack-operators(a56bcd98-4612-4d2a-b172-78d775c10b6a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 01 14:35:22 crc kubenswrapper[4820]: E0201 14:35:22.092993 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vn95k" podUID="a56bcd98-4612-4d2a-b172-78d775c10b6a"
Feb 01 14:35:22 crc kubenswrapper[4820]: I0201 14:35:22.127901 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh"
Feb 01 14:35:22 crc kubenswrapper[4820]: I0201 14:35:22.396169 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-metrics-certs\") pod \"openstack-operator-controller-manager-86d788bc79-jdccp\" (UID: \"b1b641e7-c3a8-4f7e-89c7-e362a3080f70\") " pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp"
Feb 01 14:35:22 crc kubenswrapper[4820]: I0201 14:35:22.396272 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-webhook-certs\") pod \"openstack-operator-controller-manager-86d788bc79-jdccp\" (UID: \"b1b641e7-c3a8-4f7e-89c7-e362a3080f70\") " pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp"
Feb 01 14:35:22 crc kubenswrapper[4820]: I0201 14:35:22.402007 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-webhook-certs\") pod \"openstack-operator-controller-manager-86d788bc79-jdccp\" (UID: \"b1b641e7-c3a8-4f7e-89c7-e362a3080f70\") " pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp"
Feb 01 14:35:22 crc kubenswrapper[4820]: I0201 14:35:22.405505 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1b641e7-c3a8-4f7e-89c7-e362a3080f70-metrics-certs\") pod \"openstack-operator-controller-manager-86d788bc79-jdccp\" (UID: \"b1b641e7-c3a8-4f7e-89c7-e362a3080f70\") " pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp"
Feb 01 14:35:22 crc kubenswrapper[4820]: I0201 14:35:22.445926 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp"
Feb 01 14:35:22 crc kubenswrapper[4820]: E0201 14:35:22.792433 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vn95k" podUID="a56bcd98-4612-4d2a-b172-78d775c10b6a"
Feb 01 14:35:24 crc kubenswrapper[4820]: I0201 14:35:24.373402 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cjxjw"]
Feb 01 14:35:24 crc kubenswrapper[4820]: W0201 14:35:24.495119 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6972bdb0_9870_475e_acc5_2da4ace45e58.slice/crio-8b423b09791f412a3d43eba3091a4fcb70b60cebae947b8ab11d973c55f0d782 WatchSource:0}: Error finding container 8b423b09791f412a3d43eba3091a4fcb70b60cebae947b8ab11d973c55f0d782: Status 404 returned error can't find the container with id 8b423b09791f412a3d43eba3091a4fcb70b60cebae947b8ab11d973c55f0d782
Feb 01 14:35:24 crc kubenswrapper[4820]: I0201 14:35:24.823507 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wnzxh" event={"ID":"52e7e427-3656-4c7a-afbb-a7cbd89d9318","Type":"ContainerStarted","Data":"1ba16810fa3350f37f59ee90803beb24facd536ca7d53fcf02001cc006dc9a73"}
Feb 01 14:35:24 crc kubenswrapper[4820]: I0201 14:35:24.824360 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wnzxh"
Feb 01 14:35:24 crc kubenswrapper[4820]: I0201 14:35:24.826607 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9s78" event={"ID":"ccd8a397-de84-44f7-ae90-f0e7a51f2d80","Type":"ContainerStarted","Data":"276ba49fed3dab595f071f6f0e2dcbd79e4407e94e98b4e69f7e9043672b0b67"}
Feb 01 14:35:24 crc kubenswrapper[4820]: I0201 14:35:24.826731 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9s78"
Feb 01 14:35:24 crc kubenswrapper[4820]: I0201 14:35:24.834098 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-g7844" event={"ID":"d4bd9854-7d67-4698-a00a-61ea442cc25b","Type":"ContainerStarted","Data":"b9af17993b3e9d7af99facf11eb7d64a38362995585e6910988b91a97f2b9f4f"}
Feb 01 14:35:24 crc kubenswrapper[4820]: I0201 14:35:24.834373 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-g7844"
Feb 01 14:35:24 crc kubenswrapper[4820]: I0201 14:35:24.836691 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjxjw" event={"ID":"6972bdb0-9870-475e-acc5-2da4ace45e58","Type":"ContainerStarted","Data":"8b423b09791f412a3d43eba3091a4fcb70b60cebae947b8ab11d973c55f0d782"}
Feb 01 14:35:24 crc kubenswrapper[4820]: I0201 14:35:24.867526 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wnzxh" podStartSLOduration=3.577476254 podStartE2EDuration="19.867505075s" podCreationTimestamp="2026-02-01 14:35:05 +0000 UTC" firstStartedPulling="2026-02-01 14:35:06.734478629 +0000 UTC m=+848.254844913" lastFinishedPulling="2026-02-01 14:35:23.02450745 +0000 UTC m=+864.544873734" observedRunningTime="2026-02-01 14:35:24.841685396 +0000 UTC m=+866.362051690" watchObservedRunningTime="2026-02-01 14:35:24.867505075 +0000 UTC m=+866.387871359"
Feb 01 14:35:24 crc kubenswrapper[4820]: I0201 14:35:24.874797 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-g7844" podStartSLOduration=3.24034912 podStartE2EDuration="18.874776531s" podCreationTimestamp="2026-02-01 14:35:06 +0000 UTC" firstStartedPulling="2026-02-01 14:35:07.39009239 +0000 UTC m=+848.910458674" lastFinishedPulling="2026-02-01 14:35:23.024519801 +0000 UTC m=+864.544886085" observedRunningTime="2026-02-01 14:35:24.857970213 +0000 UTC m=+866.378336497" watchObservedRunningTime="2026-02-01 14:35:24.874776531 +0000 UTC m=+866.395142815"
Feb 01 14:35:24 crc kubenswrapper[4820]: I0201 14:35:24.898547 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9s78" podStartSLOduration=4.061020431 podStartE2EDuration="19.898526889s" podCreationTimestamp="2026-02-01 14:35:05 +0000 UTC" firstStartedPulling="2026-02-01 14:35:07.186998672 +0000 UTC m=+848.707364956" lastFinishedPulling="2026-02-01 14:35:23.02450513 +0000 UTC m=+864.544871414" observedRunningTime="2026-02-01 14:35:24.892847261 +0000 UTC m=+866.413213545" watchObservedRunningTime="2026-02-01 14:35:24.898526889 +0000 UTC m=+866.418893173"
Feb 01 14:35:24 crc kubenswrapper[4820]: I0201 14:35:24.944399 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp"]
Feb 01 14:35:25 crc kubenswrapper[4820]: W0201 14:35:24.999778 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1b641e7_c3a8_4f7e_89c7_e362a3080f70.slice/crio-0bc507907d6cfcc1dbe52a5fdf0e549a2f4f9e96c7656daddf27963a2902f94b WatchSource:0}: Error finding container 0bc507907d6cfcc1dbe52a5fdf0e549a2f4f9e96c7656daddf27963a2902f94b: Status 404 returned error can't find the container with id 0bc507907d6cfcc1dbe52a5fdf0e549a2f4f9e96c7656daddf27963a2902f94b
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.009184 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-tbjvq"]
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.110319 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh"]
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.862005 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tbjvq" event={"ID":"09ceaf4b-4a63-4ba6-9b77-ac550850ffe4","Type":"ContainerStarted","Data":"eec4f8a62827c5af8cda3fe3a178ca902edaae8f2b09f2efe78de239cf537f5f"}
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.885530 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9fz5b" event={"ID":"4a8d372d-6d1c-4366-983c-947ccd5403e6","Type":"ContainerStarted","Data":"1f0bea7a089f7ba31519ebf3446f777224a4376b162582f5974f2794364b8398"}
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.886421 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9fz5b"
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.891581 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-54ttw" event={"ID":"39a1a56a-4224-411d-95d1-49cf679d773e","Type":"ContainerStarted","Data":"d363c266d47773bb9c41edb83101db0f7b495262d49f677ac0e87b521c1449a2"}
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.892006 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-54ttw"
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.893057 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp" event={"ID":"b1b641e7-c3a8-4f7e-89c7-e362a3080f70","Type":"ContainerStarted","Data":"0bc507907d6cfcc1dbe52a5fdf0e549a2f4f9e96c7656daddf27963a2902f94b"}
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.894737 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-slbdv" event={"ID":"92bf88c4-d908-4d13-be4e-72936688113c","Type":"ContainerStarted","Data":"f5d2c2d416a27759cc0f4f3b5169f026ad90c3346df035b73778b7abe36466b2"}
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.895146 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-slbdv"
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.896839 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-h9f62" event={"ID":"752637ba-3ba3-4b87-89f4-d2a079743b70","Type":"ContainerStarted","Data":"a93a82d4cfde52c4373446cbae263364ba2330588e23785e849f5164b8dfe4a3"}
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.897165 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-h9f62"
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.898490 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-4mbst" event={"ID":"247ddb96-9b27-41f5-8890-8bf2dd175c36","Type":"ContainerStarted","Data":"622361a9080c7329cc1c4e6b67ae230a52e9d1d150e7f378e1325436abf5b2a4"}
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.898684 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-4mbst"
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.901103 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rf496" event={"ID":"90739996-10ed-4ad6-b704-012fafc3505f","Type":"ContainerStarted","Data":"517999d722d48a2b51dfa0eea97f25217344e939db1fb2d16b5de67e5f14738b"}
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.901747 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rf496"
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.910054 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qwxns" event={"ID":"c560d3b1-caa9-4125-bc88-cdcdfb7ed651","Type":"ContainerStarted","Data":"55aec0a35fd3926df729cf01b40ff2a1f3f7b293e6184ce2d0df41c15f9b4615"}
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.912380 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9fz5b" podStartSLOduration=4.708520403 podStartE2EDuration="20.910899875s" podCreationTimestamp="2026-02-01 14:35:05 +0000 UTC" firstStartedPulling="2026-02-01 14:35:06.822220742 +0000 UTC m=+848.342587026" lastFinishedPulling="2026-02-01 14:35:23.024600204 +0000 UTC m=+864.544966498" observedRunningTime="2026-02-01 14:35:25.900589175 +0000 UTC m=+867.420955459" watchObservedRunningTime="2026-02-01 14:35:25.910899875 +0000 UTC m=+867.431266169"
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.918365 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5dxhd" event={"ID":"9a9e9d77-ca3a-4cd4-b39a-75f8730f12f6","Type":"ContainerStarted","Data":"12eea11168ccaa6330751509a0e7be6d5c8854339d91ecf8218589652b55f345"}
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.918714 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5dxhd"
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.925311 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-54ttw" podStartSLOduration=5.114384633 podStartE2EDuration="20.925291205s" podCreationTimestamp="2026-02-01 14:35:05 +0000 UTC" firstStartedPulling="2026-02-01 14:35:07.213768363 +0000 UTC m=+848.734134647" lastFinishedPulling="2026-02-01 14:35:23.024674935 +0000 UTC m=+864.545041219" observedRunningTime="2026-02-01 14:35:25.917760192 +0000 UTC m=+867.438126476" watchObservedRunningTime="2026-02-01 14:35:25.925291205 +0000 UTC m=+867.445657479"
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.928355 4820 generic.go:334] "Generic (PLEG): container finished" podID="6972bdb0-9870-475e-acc5-2da4ace45e58" containerID="61e5b3ff6ee80b351d025e1a0491abfe7359523c5e681c91cae7e2c2895a584e" exitCode=0
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.928434 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjxjw" event={"ID":"6972bdb0-9870-475e-acc5-2da4ace45e58","Type":"ContainerDied","Data":"61e5b3ff6ee80b351d025e1a0491abfe7359523c5e681c91cae7e2c2895a584e"}
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.931546 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fqfvc" event={"ID":"670d53e4-21ca-4ec7-b72b-0e938b2d85e8","Type":"ContainerStarted","Data":"138acee480c76105fbc64c6d81dcb48a561f115d064e7adb34a184cfe1368cfa"}
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.931588 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fqfvc"
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.936267 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-4mbst" podStartSLOduration=2.923311651 podStartE2EDuration="19.936254002s" podCreationTimestamp="2026-02-01 14:35:06 +0000 UTC" firstStartedPulling="2026-02-01 14:35:07.53238056 +0000 UTC m=+849.052746834" lastFinishedPulling="2026-02-01 14:35:24.545322901 +0000 UTC m=+866.065689185" observedRunningTime="2026-02-01 14:35:25.935288468 +0000 UTC m=+867.455654752" watchObservedRunningTime="2026-02-01 14:35:25.936254002 +0000 UTC m=+867.456620286"
Feb 01 14:35:25 crc kubenswrapper[4820]: I0201 14:35:25.985454 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-slbdv" podStartSLOduration=5.30426797 podStartE2EDuration="20.985437618s" podCreationTimestamp="2026-02-01 14:35:05 +0000 UTC" firstStartedPulling="2026-02-01 14:35:06.88711834 +0000 UTC m=+848.407484624" lastFinishedPulling="2026-02-01 14:35:22.568287988 +0000 UTC m=+864.088654272" observedRunningTime="2026-02-01 14:35:25.966275442 +0000 UTC m=+867.486641746" watchObservedRunningTime="2026-02-01 14:35:25.985437618 +0000 UTC m=+867.505803902"
Feb 01 14:35:26 crc kubenswrapper[4820]: I0201 14:35:26.007537 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qwxns" podStartSLOduration=2.804953884 podStartE2EDuration="20.007521265s" podCreationTimestamp="2026-02-01 14:35:06 +0000 UTC" firstStartedPulling="2026-02-01 14:35:07.531376996 +0000 UTC m=+849.051743280" lastFinishedPulling="2026-02-01 14:35:24.733944377 +0000 UTC m=+866.254310661" observedRunningTime="2026-02-01 14:35:26.006966922 +0000 UTC m=+867.527333206" watchObservedRunningTime="2026-02-01 14:35:26.007521265 +0000 UTC m=+867.527887549"
Feb 01 14:35:26 crc kubenswrapper[4820]: I0201 14:35:26.016248 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-h9f62" podStartSLOduration=5.039796008 podStartE2EDuration="21.016228276s" podCreationTimestamp="2026-02-01 14:35:05 +0000 UTC" firstStartedPulling="2026-02-01 14:35:07.04834681 +0000 UTC m=+848.568713094" lastFinishedPulling="2026-02-01 14:35:23.024779088 +0000 UTC m=+864.545145362" observedRunningTime="2026-02-01 14:35:25.984466715 +0000 UTC m=+867.504832999" watchObservedRunningTime="2026-02-01 14:35:26.016228276 +0000 UTC m=+867.536594560"
Feb 01 14:35:26 crc kubenswrapper[4820]: I0201 14:35:26.051064 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fqfvc" podStartSLOduration=5.09616266 podStartE2EDuration="21.051045413s" podCreationTimestamp="2026-02-01 14:35:05 +0000 UTC" firstStartedPulling="2026-02-01 14:35:07.069647788 +0000 UTC m=+848.590014072" lastFinishedPulling="2026-02-01 14:35:23.024530541 +0000 UTC m=+864.544896825" observedRunningTime="2026-02-01 14:35:26.047296451 +0000 UTC m=+867.567662735" watchObservedRunningTime="2026-02-01 14:35:26.051045413 +0000 UTC m=+867.571411697"
Feb 01 14:35:26 crc kubenswrapper[4820]: I0201 14:35:26.069312 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5dxhd" podStartSLOduration=3.05941632 podStartE2EDuration="20.069297537s" podCreationTimestamp="2026-02-01 14:35:06 +0000 UTC" firstStartedPulling="2026-02-01 14:35:07.524887967 +0000 UTC m=+849.045254251" lastFinishedPulling="2026-02-01 14:35:24.534769184 +0000 UTC m=+866.055135468" observedRunningTime="2026-02-01 14:35:26.068800895 +0000 UTC m=+867.589167179" watchObservedRunningTime="2026-02-01 14:35:26.069297537 +0000 UTC m=+867.589663821"
Feb 01 14:35:27 crc kubenswrapper[4820]: I0201 14:35:27.949352 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh" event={"ID":"597a3429-85c5-4e47-983e-c77f2ccc22d3","Type":"ContainerStarted","Data":"2ed96738979932da030d7bebaced2097a532d7dfdbadbc5d8ec014e256ebd1d4"}
Feb 01 14:35:27 crc kubenswrapper[4820]: I0201 14:35:27.952134 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp" event={"ID":"b1b641e7-c3a8-4f7e-89c7-e362a3080f70","Type":"ContainerStarted","Data":"eea603bde4fbc2e8a2f3ee87919c4572be1ecf7b08d2685414d69541ef804833"}
Feb 01 14:35:27 crc kubenswrapper[4820]: I0201 14:35:27.974094 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp" podStartSLOduration=21.974074103 podStartE2EDuration="21.974074103s" podCreationTimestamp="2026-02-01 14:35:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:35:27.972555935 +0000 UTC m=+869.492922239" watchObservedRunningTime="2026-02-01 14:35:27.974074103 +0000 UTC m=+869.494440387"
Feb 01 14:35:27 crc kubenswrapper[4820]: I0201 14:35:27.978337 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rf496" podStartSLOduration=6.834597421 podStartE2EDuration="22.978320896s" podCreationTimestamp="2026-02-01 14:35:05 +0000 UTC" firstStartedPulling="2026-02-01 14:35:06.880863848 +0000 UTC m=+848.401230132" lastFinishedPulling="2026-02-01 14:35:23.024587323 +0000 UTC m=+864.544953607" observedRunningTime="2026-02-01 14:35:26.08918784 +0000 UTC m=+867.609554124" watchObservedRunningTime="2026-02-01 14:35:27.978320896 +0000 UTC m=+869.498687180"
Feb 01 14:35:28 crc kubenswrapper[4820]: I0201 14:35:28.958555 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp"
Feb 01 14:35:30 crc kubenswrapper[4820]: I0201 14:35:30.972946 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh" event={"ID":"597a3429-85c5-4e47-983e-c77f2ccc22d3","Type":"ContainerStarted","Data":"ae3114b1daa69e2090ace01cbef865de28ee937ff84eb790905efe08786bf4aa"}
Feb 01 14:35:30 crc kubenswrapper[4820]: I0201 14:35:30.973492 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh"
Feb 01 14:35:30 crc kubenswrapper[4820]: I0201 14:35:30.975028 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjxjw" event={"ID":"6972bdb0-9870-475e-acc5-2da4ace45e58","Type":"ContainerStarted","Data":"f1cb2549bb9ab2b3a4594751cfbd0ca92e2a2f68bb4afab343763143637adb40"}
Feb 01 14:35:30 crc kubenswrapper[4820]: I0201 14:35:30.976436 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-6lqdn" event={"ID":"7cc4d222-cdf7-4f68-b8b0-eb9a01b0cc12","Type":"ContainerStarted","Data":"da5c8b27ba6a4367f12883ccd5d6823411653c6406ec4156e69f5ced74898b75"}
Feb 01 14:35:30 crc kubenswrapper[4820]: I0201 14:35:30.977823 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tbjvq" event={"ID":"09ceaf4b-4a63-4ba6-9b77-ac550850ffe4","Type":"ContainerStarted","Data":"22316bfbdeac32684557195b0e3a89d33b27e5062bcb2659e859b0c750759178"}
Feb 01 14:35:30 crc kubenswrapper[4820]: I0201 14:35:30.978183 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tbjvq"
Feb 01 14:35:30 crc kubenswrapper[4820]: I0201 14:35:30.979312 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-c65qc" event={"ID":"2036e257-d3cc-48a2-bfc5-d3a262090c1e","Type":"ContainerStarted","Data":"2608845c49c7c0264cc142b9a9779a8fdae43b106365576e6401634e51dcd9e3"}
Feb 01 14:35:30 crc kubenswrapper[4820]: I0201 14:35:30.979460 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-c65qc"
Feb 01 14:35:30 crc kubenswrapper[4820]: I0201 14:35:30.980736 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-d85699b78-lhgzd" event={"ID":"e5493c34-468b-4636-b236-9b5cb6e95de1","Type":"ContainerStarted","Data":"e9cfca3e389f99378118c1a646c695b36be098d4024a8dc7b1efc03614099df9"}
Feb 01 14:35:30 crc kubenswrapper[4820]: I0201 14:35:30.980919 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-d85699b78-lhgzd"
Feb 01 14:35:30 crc kubenswrapper[4820]: I0201 14:35:30.981931 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mljht" event={"ID":"07b45e70-e382-4778-8d7e-f360ab63dcf9","Type":"ContainerStarted","Data":"cf75cde085a9d0d0f44e623ed9b590f33b9daa877404828c1480ab8140590e04"}
Feb 01 14:35:30 crc kubenswrapper[4820]: I0201 14:35:30.982100 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mljht"
Feb 01 14:35:30 crc kubenswrapper[4820]: I0201 14:35:30.997175 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh" podStartSLOduration=22.459311311 podStartE2EDuration="24.997159151s" podCreationTimestamp="2026-02-01 14:35:06 +0000 UTC" firstStartedPulling="2026-02-01 14:35:27.43564156 +0000 UTC m=+868.956007844" lastFinishedPulling="2026-02-01 14:35:29.97348941 +0000 UTC m=+871.493855684" observedRunningTime="2026-02-01 14:35:30.995440329 +0000 UTC m=+872.515806613" watchObservedRunningTime="2026-02-01 14:35:30.997159151 +0000 UTC m=+872.517525435"
Feb 01 14:35:31 crc kubenswrapper[4820]: I0201 14:35:31.037898 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-d85699b78-lhgzd" podStartSLOduration=3.536389393 podStartE2EDuration="26.037864441s" podCreationTimestamp="2026-02-01 14:35:05 +0000 UTC" firstStartedPulling="2026-02-01 14:35:07.391016943 +0000 UTC m=+848.911383227" lastFinishedPulling="2026-02-01 14:35:29.892491991 +0000 UTC m=+871.412858275" observedRunningTime="2026-02-01 14:35:31.015297612 +0000 UTC m=+872.535663886" watchObservedRunningTime="2026-02-01 14:35:31.037864441 +0000 UTC m=+872.558230725"
Feb 01 14:35:31 crc kubenswrapper[4820]: I0201 14:35:31.038070 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mljht" podStartSLOduration=2.613466998 podStartE2EDuration="25.038062136s" podCreationTimestamp="2026-02-01 14:35:06 +0000 UTC" firstStartedPulling="2026-02-01 14:35:07.525014621 +0000 UTC m=+849.045380905" lastFinishedPulling="2026-02-01 14:35:29.949609759 +0000 UTC m=+871.469976043" observedRunningTime="2026-02-01 14:35:31.031294711 +0000 UTC m=+872.551660995" watchObservedRunningTime="2026-02-01 14:35:31.038062136 +0000 UTC m=+872.558428420"
Feb 01 14:35:31 crc kubenswrapper[4820]: I0201 14:35:31.071198 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tbjvq" podStartSLOduration=21.157754068 podStartE2EDuration="26.071175971s" podCreationTimestamp="2026-02-01 14:35:05 +0000 UTC" firstStartedPulling="2026-02-01 14:35:25.060723353 +0000 UTC m=+866.581089637" lastFinishedPulling="2026-02-01 14:35:29.974145256 +0000 UTC m=+871.494511540" observedRunningTime="2026-02-01 14:35:31.064075088 +0000 UTC m=+872.584441382" watchObservedRunningTime="2026-02-01 14:35:31.071175971 +0000 UTC m=+872.591542255"
Feb 01 14:35:31 crc kubenswrapper[4820]: I0201 14:35:31.086804 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-6lqdn" podStartSLOduration=3.590671394 podStartE2EDuration="26.086787331s" podCreationTimestamp="2026-02-01 14:35:05 +0000 UTC" firstStartedPulling="2026-02-01 14:35:07.396470336 +0000 UTC m=+848.916836620" lastFinishedPulling="2026-02-01 14:35:29.892586273 +0000 UTC m=+871.412952557" observedRunningTime="2026-02-01 14:35:31.082551888 +0000 UTC m=+872.602918182" watchObservedRunningTime="2026-02-01 14:35:31.086787331 +0000 UTC m=+872.607153615"
Feb 01 14:35:31 crc kubenswrapper[4820]: I0201 14:35:31.097429 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-c65qc" podStartSLOduration=2.732412259 podStartE2EDuration="25.097409928s" podCreationTimestamp="2026-02-01 14:35:06 +0000 UTC" firstStartedPulling="2026-02-01 14:35:07.527482861 +0000 UTC m=+849.047849135" lastFinishedPulling="2026-02-01 14:35:29.89248052 +0000 UTC m=+871.412846804" observedRunningTime="2026-02-01 14:35:31.094549599 +0000 UTC m=+872.614915883" watchObservedRunningTime="2026-02-01 14:35:31.097409928 +0000 UTC m=+872.617776212"
Feb 01 14:35:31 crc kubenswrapper[4820]: I0201 14:35:31.988496 4820 generic.go:334] "Generic (PLEG): container finished" podID="6972bdb0-9870-475e-acc5-2da4ace45e58" containerID="f1cb2549bb9ab2b3a4594751cfbd0ca92e2a2f68bb4afab343763143637adb40" exitCode=0
Feb 01 14:35:31 crc kubenswrapper[4820]: I0201 14:35:31.988592 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjxjw" event={"ID":"6972bdb0-9870-475e-acc5-2da4ace45e58","Type":"ContainerDied","Data":"f1cb2549bb9ab2b3a4594751cfbd0ca92e2a2f68bb4afab343763143637adb40"}
Feb 01 14:35:32 crc kubenswrapper[4820]: I0201 14:35:32.452114 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-86d788bc79-jdccp"
Feb 01 14:35:36 crc kubenswrapper[4820]: I0201 14:35:36.112895 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wnzxh"
Feb 01 14:35:36 crc kubenswrapper[4820]: I0201 14:35:36.121064 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-9fz5b"
Feb 01 14:35:36 crc kubenswrapper[4820]: I0201 14:35:36.184151 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rf496"
Feb 01 14:35:36 crc kubenswrapper[4820]: I0201 14:35:36.205098 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-slbdv"
Feb 01 14:35:36 crc kubenswrapper[4820]: I0201 14:35:36.220994 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-h9f62"
Feb 01 14:35:36 crc kubenswrapper[4820]: I0201 14:35:36.284696 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fqfvc"
Feb 01 14:35:36 crc kubenswrapper[4820]: I0201 14:35:36.332457 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9s78"
Feb 01 14:35:36 crc kubenswrapper[4820]: I0201 14:35:36.404444 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-d85699b78-lhgzd"
Feb 01 14:35:36 crc kubenswrapper[4820]: I0201 14:35:36.435493 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-54ttw"
Feb 01 14:35:36 crc kubenswrapper[4820]: I0201 14:35:36.501485 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-6lqdn"
Feb 01 14:35:36 crc kubenswrapper[4820]: I0201 14:35:36.503563 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-6lqdn"
Feb 01 14:35:36 crc kubenswrapper[4820]: I0201 14:35:36.548500 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mljht"
Feb 01 14:35:36 crc kubenswrapper[4820]: I0201 14:35:36.568218 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-g7844"
Feb 01 14:35:36 crc kubenswrapper[4820]: I0201 14:35:36.628463 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-c65qc"
Feb 01 14:35:36 crc kubenswrapper[4820]: I0201 14:35:36.753481 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-5dxhd"
Feb 01 14:35:36 crc kubenswrapper[4820]: I0201 14:35:36.775592 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-4mbst"
Feb 01 14:35:44 crc kubenswrapper[4820]: I0201 14:35:41.911232 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tbjvq"
Feb 01 14:35:44 crc kubenswrapper[4820]: I0201 14:35:42.133082 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh"
Feb 01 14:35:45 crc kubenswrapper[4820]: I0201 14:35:45.076168 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-h5tlb" event={"ID":"7524b653-9c87-474e-b819-ebdd2864815c","Type":"ContainerStarted","Data":"a039f98534982520a9d511dea7b765658762126dcec52e53ebf36206548ab1a5"}
Feb 01 14:35:45 crc kubenswrapper[4820]: I0201 14:35:45.077151 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-h5tlb"
Feb 01 14:35:45 crc kubenswrapper[4820]: I0201 14:35:45.077517 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-nqvp8" event={"ID":"3f3881e5-dfdc-4d18-9d61-78439a69d0cb","Type":"ContainerStarted","Data":"bff1c5c7de083f6a65b9b43679c4a9498e60d786e1b9cf6814b69731c4e0b1ce"}
Feb 01 14:35:45 crc kubenswrapper[4820]: I0201 14:35:45.077819 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-nqvp8"
Feb 01 14:35:45 crc kubenswrapper[4820]: I0201 14:35:45.079352 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjxjw" event={"ID":"6972bdb0-9870-475e-acc5-2da4ace45e58","Type":"ContainerStarted","Data":"47ce564b5ce59e66d76ac944e67ed8d66cc8e97cc3438f38d5a5549f29bf21db"}
Feb 01 14:35:45 crc kubenswrapper[4820]: I0201 14:35:45.080706 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vn95k" event={"ID":"a56bcd98-4612-4d2a-b172-78d775c10b6a","Type":"ContainerStarted","Data":"f13234ff77a09662cefc9c9027ec038da32a13c21b1fd01f9d31fa85167b8b86"}
Feb 01 14:35:45 crc kubenswrapper[4820]: I0201 14:35:45.081128 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vn95k"
Feb 01 14:35:45 crc kubenswrapper[4820]: I0201 14:35:45.082151 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-x5p2q" event={"ID":"abe5d51e-a818-4e88-93d9-4f53a8d368b4","Type":"ContainerStarted","Data":"247de632653cea53385e35e1d1010fe3832b7b9590d28dfc4a02a832a60585d4"}
Feb 01 14:35:45 crc kubenswrapper[4820]: I0201 14:35:45.082471 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-x5p2q"
Feb 01 14:35:45 crc kubenswrapper[4820]: I0201 14:35:45.122172 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-nqvp8" podStartSLOduration=2.062361617 podStartE2EDuration="39.12215338s" podCreationTimestamp="2026-02-01 14:35:06 +0000 UTC" firstStartedPulling="2026-02-01 14:35:07.388368588 +0000 UTC m=+848.908734872" lastFinishedPulling="2026-02-01 14:35:44.448160351 +0000 UTC m=+885.968526635" observedRunningTime="2026-02-01 14:35:45.11972105 +0000 UTC m=+886.640087334" watchObservedRunningTime="2026-02-01 14:35:45.12215338 +0000 UTC m=+886.642519654"
Feb 01 14:35:45 crc kubenswrapper[4820]: I0201 14:35:45.125165 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-h5tlb" podStartSLOduration=3.065242027 podStartE2EDuration="40.125154532s" podCreationTimestamp="2026-02-01 14:35:05 +0000 UTC" firstStartedPulling="2026-02-01 14:35:07.388891551 +0000 UTC m=+848.909257835" lastFinishedPulling="2026-02-01 14:35:44.448804056 +0000 UTC m=+885.969170340" observedRunningTime="2026-02-01 14:35:45.10615702 +0000 UTC m=+886.626523304" watchObservedRunningTime="2026-02-01 14:35:45.125154532 +0000 UTC m=+886.645520816"
Feb 01 14:35:45 crc kubenswrapper[4820]: I0201 14:35:45.136569 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vn95k" podStartSLOduration=3.062996202 podStartE2EDuration="40.136552729s" podCreationTimestamp="2026-02-01 14:35:05 +0000 UTC" firstStartedPulling="2026-02-01 14:35:07.375116246 +0000 UTC m=+848.895482530" lastFinishedPulling="2026-02-01 14:35:44.448672773 +0000 UTC m=+885.969039057" observedRunningTime="2026-02-01 14:35:45.135141105 +0000 UTC m=+886.655507469" watchObservedRunningTime="2026-02-01 14:35:45.136552729 +0000 UTC m=+886.656919013"
Feb 01 14:35:45 crc kubenswrapper[4820]: I0201 14:35:45.152129 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cjxjw" podStartSLOduration=11.465563928 podStartE2EDuration="29.152107878s" podCreationTimestamp="2026-02-01 14:35:16 +0000 UTC" firstStartedPulling="2026-02-01 14:35:26.762249646 +0000 UTC m=+868.282615930" lastFinishedPulling="2026-02-01 14:35:44.448793576 +0000 UTC m=+885.969159880" observedRunningTime="2026-02-01 14:35:45.148762827 +0000 UTC m=+886.669129131" watchObservedRunningTime="2026-02-01 14:35:45.152107878 +0000 UTC m=+886.672474172"
Feb 01 14:35:45 crc kubenswrapper[4820]: I0201 14:35:45.172932 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-x5p2q" podStartSLOduration=2.936507396 podStartE2EDuration="40.172912043s" podCreationTimestamp="2026-02-01 14:35:05 +0000 UTC" firstStartedPulling="2026-02-01 14:35:07.212123333 +0000 UTC m=+848.732489627" lastFinishedPulling="2026-02-01 14:35:44.44852798 +0000 UTC m=+885.968894274" observedRunningTime="2026-02-01 14:35:45.167405189 +0000 UTC m=+886.687771483" watchObservedRunningTime="2026-02-01 14:35:45.172912043 +0000 UTC m=+886.693278327"
Feb 01 14:35:46 crc kubenswrapper[4820]: I0201 14:35:46.613240 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cjxjw"
Feb 01 14:35:46 crc kubenswrapper[4820]: I0201 14:35:46.613300 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cjxjw"
Feb 01 14:35:46 crc kubenswrapper[4820]: I0201 14:35:46.670063 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cjxjw"
Feb 01 14:35:50 crc kubenswrapper[4820]: I0201 14:35:50.653174 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ndm5w"]
Feb 01 14:35:50 crc kubenswrapper[4820]: I0201 14:35:50.654861 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ndm5w"
Feb 01 14:35:50 crc kubenswrapper[4820]: I0201 14:35:50.677987 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ndm5w"]
Feb 01 14:35:50 crc kubenswrapper[4820]: I0201 14:35:50.701484 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba5fd2e4-a1c2-448b-a69b-25fced202a4b-catalog-content\") pod \"redhat-operators-ndm5w\" (UID: \"ba5fd2e4-a1c2-448b-a69b-25fced202a4b\") " pod="openshift-marketplace/redhat-operators-ndm5w"
Feb 01 14:35:50 crc kubenswrapper[4820]: I0201 14:35:50.701568 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf5js\" (UniqueName: \"kubernetes.io/projected/ba5fd2e4-a1c2-448b-a69b-25fced202a4b-kube-api-access-zf5js\") pod \"redhat-operators-ndm5w\" (UID: \"ba5fd2e4-a1c2-448b-a69b-25fced202a4b\") " pod="openshift-marketplace/redhat-operators-ndm5w"
Feb 01 14:35:50 crc kubenswrapper[4820]: I0201 14:35:50.701623 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba5fd2e4-a1c2-448b-a69b-25fced202a4b-utilities\") pod \"redhat-operators-ndm5w\" (UID: \"ba5fd2e4-a1c2-448b-a69b-25fced202a4b\") " pod="openshift-marketplace/redhat-operators-ndm5w"
Feb 01 14:35:50 crc kubenswrapper[4820]: I0201 14:35:50.802769 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf5js\" (UniqueName: \"kubernetes.io/projected/ba5fd2e4-a1c2-448b-a69b-25fced202a4b-kube-api-access-zf5js\") pod \"redhat-operators-ndm5w\" (UID: \"ba5fd2e4-a1c2-448b-a69b-25fced202a4b\") " pod="openshift-marketplace/redhat-operators-ndm5w"
Feb 01 14:35:50 crc kubenswrapper[4820]: I0201 14:35:50.802841 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba5fd2e4-a1c2-448b-a69b-25fced202a4b-utilities\") pod \"redhat-operators-ndm5w\" (UID: \"ba5fd2e4-a1c2-448b-a69b-25fced202a4b\") " pod="openshift-marketplace/redhat-operators-ndm5w"
Feb 01 14:35:50 crc kubenswrapper[4820]: I0201 14:35:50.802943 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba5fd2e4-a1c2-448b-a69b-25fced202a4b-catalog-content\") pod \"redhat-operators-ndm5w\" (UID: \"ba5fd2e4-a1c2-448b-a69b-25fced202a4b\") " pod="openshift-marketplace/redhat-operators-ndm5w"
Feb 01 14:35:50 crc kubenswrapper[4820]: I0201 14:35:50.803367 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba5fd2e4-a1c2-448b-a69b-25fced202a4b-utilities\") pod \"redhat-operators-ndm5w\" (UID: \"ba5fd2e4-a1c2-448b-a69b-25fced202a4b\") " pod="openshift-marketplace/redhat-operators-ndm5w"
Feb 01 14:35:50 crc kubenswrapper[4820]: I0201 14:35:50.803488 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba5fd2e4-a1c2-448b-a69b-25fced202a4b-catalog-content\") pod \"redhat-operators-ndm5w\" (UID: \"ba5fd2e4-a1c2-448b-a69b-25fced202a4b\") " pod="openshift-marketplace/redhat-operators-ndm5w"
Feb 01 14:35:50 crc kubenswrapper[4820]: I0201 14:35:50.829117 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf5js\" (UniqueName: \"kubernetes.io/projected/ba5fd2e4-a1c2-448b-a69b-25fced202a4b-kube-api-access-zf5js\") pod \"redhat-operators-ndm5w\" (UID: \"ba5fd2e4-a1c2-448b-a69b-25fced202a4b\") " pod="openshift-marketplace/redhat-operators-ndm5w"
Feb 01 14:35:50 crc kubenswrapper[4820]: I0201 14:35:50.974658 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ndm5w"
Feb 01 14:35:51 crc kubenswrapper[4820]: I0201 14:35:51.222930 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ndm5w"]
Feb 01 14:35:52 crc kubenswrapper[4820]: I0201 14:35:52.133584 4820 generic.go:334] "Generic (PLEG): container finished" podID="ba5fd2e4-a1c2-448b-a69b-25fced202a4b" containerID="3cf231458698367dfaf7f39c8c3879587bc0a56be67ff242729716ef689c1a00" exitCode=0
Feb 01 14:35:52 crc kubenswrapper[4820]: I0201 14:35:52.134587 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndm5w" event={"ID":"ba5fd2e4-a1c2-448b-a69b-25fced202a4b","Type":"ContainerDied","Data":"3cf231458698367dfaf7f39c8c3879587bc0a56be67ff242729716ef689c1a00"}
Feb 01 14:35:52 crc kubenswrapper[4820]: I0201 14:35:52.134700 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndm5w" event={"ID":"ba5fd2e4-a1c2-448b-a69b-25fced202a4b","Type":"ContainerStarted","Data":"4b3dfc758786e402e128f5e00b691c2f381d3e8c5d2238eb6d2b5495509ba0c4"}
Feb 01 14:35:53 crc kubenswrapper[4820]: I0201 14:35:53.145498 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndm5w" event={"ID":"ba5fd2e4-a1c2-448b-a69b-25fced202a4b","Type":"ContainerStarted","Data":"18330f382f865921b6229fa7d9ec92cd71546df92ba66951026d09cbf4abd1c0"}
Feb 01 14:35:54 crc kubenswrapper[4820]: I0201 14:35:54.156101 4820 generic.go:334] "Generic (PLEG): container finished" podID="ba5fd2e4-a1c2-448b-a69b-25fced202a4b" containerID="18330f382f865921b6229fa7d9ec92cd71546df92ba66951026d09cbf4abd1c0" exitCode=0
Feb 01 14:35:54 crc kubenswrapper[4820]: I0201 14:35:54.156198 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndm5w" event={"ID":"ba5fd2e4-a1c2-448b-a69b-25fced202a4b","Type":"ContainerDied","Data":"18330f382f865921b6229fa7d9ec92cd71546df92ba66951026d09cbf4abd1c0"}
Feb 01 14:35:54 crc kubenswrapper[4820]: I0201 14:35:54.617899 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c4qds"]
Feb 01 14:35:54 crc kubenswrapper[4820]: I0201 14:35:54.620412 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4qds"
Feb 01 14:35:54 crc kubenswrapper[4820]: I0201 14:35:54.631344 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c4qds"]
Feb 01 14:35:54 crc kubenswrapper[4820]: I0201 14:35:54.779481 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36770a7b-a53b-4d5e-afa7-0e3e91c77d57-catalog-content\") pod \"community-operators-c4qds\" (UID: \"36770a7b-a53b-4d5e-afa7-0e3e91c77d57\") " pod="openshift-marketplace/community-operators-c4qds"
Feb 01 14:35:54 crc kubenswrapper[4820]: I0201 14:35:54.779594 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26j4s\" (UniqueName: \"kubernetes.io/projected/36770a7b-a53b-4d5e-afa7-0e3e91c77d57-kube-api-access-26j4s\") pod \"community-operators-c4qds\" (UID: \"36770a7b-a53b-4d5e-afa7-0e3e91c77d57\") " pod="openshift-marketplace/community-operators-c4qds"
Feb 01 14:35:54 crc kubenswrapper[4820]: I0201 14:35:54.780208 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36770a7b-a53b-4d5e-afa7-0e3e91c77d57-utilities\") pod \"community-operators-c4qds\" (UID: \"36770a7b-a53b-4d5e-afa7-0e3e91c77d57\") " pod="openshift-marketplace/community-operators-c4qds"
Feb 01 14:35:54 crc kubenswrapper[4820]: I0201 14:35:54.881765 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26j4s\" (UniqueName: \"kubernetes.io/projected/36770a7b-a53b-4d5e-afa7-0e3e91c77d57-kube-api-access-26j4s\") pod \"community-operators-c4qds\" (UID: \"36770a7b-a53b-4d5e-afa7-0e3e91c77d57\") " pod="openshift-marketplace/community-operators-c4qds"
Feb 01 14:35:54 crc kubenswrapper[4820]: I0201 14:35:54.881828 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36770a7b-a53b-4d5e-afa7-0e3e91c77d57-utilities\") pod \"community-operators-c4qds\" (UID: \"36770a7b-a53b-4d5e-afa7-0e3e91c77d57\") " pod="openshift-marketplace/community-operators-c4qds"
Feb 01 14:35:54 crc kubenswrapper[4820]: I0201 14:35:54.881906 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36770a7b-a53b-4d5e-afa7-0e3e91c77d57-catalog-content\") pod \"community-operators-c4qds\" (UID: \"36770a7b-a53b-4d5e-afa7-0e3e91c77d57\") " pod="openshift-marketplace/community-operators-c4qds"
Feb 01 14:35:54 crc kubenswrapper[4820]: I0201 14:35:54.882457 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36770a7b-a53b-4d5e-afa7-0e3e91c77d57-catalog-content\") pod \"community-operators-c4qds\" (UID: \"36770a7b-a53b-4d5e-afa7-0e3e91c77d57\") " pod="openshift-marketplace/community-operators-c4qds"
Feb 01 14:35:54 crc kubenswrapper[4820]: I0201 14:35:54.882487 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36770a7b-a53b-4d5e-afa7-0e3e91c77d57-utilities\") pod \"community-operators-c4qds\" (UID: \"36770a7b-a53b-4d5e-afa7-0e3e91c77d57\") " pod="openshift-marketplace/community-operators-c4qds"
Feb 01 14:35:54 crc kubenswrapper[4820]: I0201 14:35:54.902430 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26j4s\" (UniqueName: \"kubernetes.io/projected/36770a7b-a53b-4d5e-afa7-0e3e91c77d57-kube-api-access-26j4s\") pod \"community-operators-c4qds\" (UID: \"36770a7b-a53b-4d5e-afa7-0e3e91c77d57\") " pod="openshift-marketplace/community-operators-c4qds"
Feb 01 14:35:55 crc kubenswrapper[4820]: I0201 14:35:55.005202 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4qds"
Feb 01 14:35:55 crc kubenswrapper[4820]: I0201 14:35:55.183375 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndm5w" event={"ID":"ba5fd2e4-a1c2-448b-a69b-25fced202a4b","Type":"ContainerStarted","Data":"f51be7755b2a967cb640720ac5434a2cd8927e5ccda25071498ad41883d599e8"}
Feb 01 14:35:55 crc kubenswrapper[4820]: I0201 14:35:55.245565 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ndm5w" podStartSLOduration=2.808573091 podStartE2EDuration="5.245545536s" podCreationTimestamp="2026-02-01 14:35:50 +0000 UTC" firstStartedPulling="2026-02-01 14:35:52.1353324 +0000 UTC m=+893.655698684" lastFinishedPulling="2026-02-01 14:35:54.572304795 +0000 UTC m=+896.092671129" observedRunningTime="2026-02-01 14:35:55.227079157 +0000 UTC m=+896.747445441" watchObservedRunningTime="2026-02-01 14:35:55.245545536 +0000 UTC m=+896.765911820"
Feb 01 14:35:55 crc kubenswrapper[4820]: I0201 14:35:55.390330 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c4qds"]
Feb 01 14:35:56 crc kubenswrapper[4820]: I0201 14:35:56.191198 4820 generic.go:334] "Generic (PLEG): container finished" podID="36770a7b-a53b-4d5e-afa7-0e3e91c77d57" containerID="585aefa23e6b9370d6148cf0f1b073c7263b6cdad67ca3dabe2e4aa3bc463738" exitCode=0
Feb 01 14:35:56 crc kubenswrapper[4820]: I0201 14:35:56.192331 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4qds" event={"ID":"36770a7b-a53b-4d5e-afa7-0e3e91c77d57","Type":"ContainerDied","Data":"585aefa23e6b9370d6148cf0f1b073c7263b6cdad67ca3dabe2e4aa3bc463738"}
Feb 01 14:35:56 crc kubenswrapper[4820]: I0201 14:35:56.192364 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4qds" event={"ID":"36770a7b-a53b-4d5e-afa7-0e3e91c77d57","Type":"ContainerStarted","Data":"b5f1e2846c7a6616d130198e96ffc89cc3ccddb54967c636f3ec117a0cb22197"}
Feb 01 14:35:56 crc kubenswrapper[4820]: I0201 14:35:56.392995 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-x5p2q"
Feb 01 14:35:56 crc kubenswrapper[4820]: I0201 14:35:56.420156 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-h5tlb"
Feb 01 14:35:56 crc kubenswrapper[4820]: I0201 14:35:56.484528 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-vn95k"
Feb 01 14:35:56 crc kubenswrapper[4820]: I0201 14:35:56.576860 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-nqvp8"
Feb 01 14:35:56 crc kubenswrapper[4820]: I0201 14:35:56.679594 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cjxjw"
Feb 01 14:35:57 crc kubenswrapper[4820]: I0201 14:35:57.200331 4820 generic.go:334] "Generic (PLEG): container finished" podID="36770a7b-a53b-4d5e-afa7-0e3e91c77d57" containerID="eb4c2594cc8f0830f26b4b18f37cb15c9ca82af2539d5a342943635b2d489945" exitCode=0
Feb 01 14:35:57 crc kubenswrapper[4820]: I0201 14:35:57.208457 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4qds" event={"ID":"36770a7b-a53b-4d5e-afa7-0e3e91c77d57","Type":"ContainerDied","Data":"eb4c2594cc8f0830f26b4b18f37cb15c9ca82af2539d5a342943635b2d489945"}
Feb 01 14:35:58 crc kubenswrapper[4820]: I0201 14:35:58.209915 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4qds" event={"ID":"36770a7b-a53b-4d5e-afa7-0e3e91c77d57","Type":"ContainerStarted","Data":"db488e89158f174c95f4869ae3635601fae25d06355ed12ca24ef33cef70c24a"}
Feb 01 14:35:58 crc kubenswrapper[4820]: I0201 14:35:58.231170 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c4qds" podStartSLOduration=2.855723299 podStartE2EDuration="4.231149443s" podCreationTimestamp="2026-02-01 14:35:54 +0000 UTC" firstStartedPulling="2026-02-01 14:35:56.192931483 +0000 UTC m=+897.713297767" lastFinishedPulling="2026-02-01 14:35:57.568357627 +0000 UTC m=+899.088723911" observedRunningTime="2026-02-01 14:35:58.226153602 +0000 UTC m=+899.746519886" watchObservedRunningTime="2026-02-01 14:35:58.231149443 +0000 UTC m=+899.751515727"
Feb 01 14:35:59 crc kubenswrapper[4820]: I0201 14:35:59.590448 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cjxjw"]
Feb 01 14:35:59 crc kubenswrapper[4820]: I0201 14:35:59.590709 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cjxjw" podUID="6972bdb0-9870-475e-acc5-2da4ace45e58" containerName="registry-server" containerID="cri-o://47ce564b5ce59e66d76ac944e67ed8d66cc8e97cc3438f38d5a5549f29bf21db" gracePeriod=2
Feb 01 14:36:00 crc kubenswrapper[4820]: I0201 14:36:00.974934 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ndm5w"
Feb 01 14:36:00 crc kubenswrapper[4820]: I0201 14:36:00.975319 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ndm5w"
Feb 01 14:36:01 crc kubenswrapper[4820]: I0201 14:36:01.039520 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ndm5w"
Feb 01 14:36:01 crc kubenswrapper[4820]: I0201 14:36:01.229923 4820 generic.go:334] "Generic (PLEG): container finished" podID="6972bdb0-9870-475e-acc5-2da4ace45e58" containerID="47ce564b5ce59e66d76ac944e67ed8d66cc8e97cc3438f38d5a5549f29bf21db" exitCode=0
Feb 01 14:36:01 crc kubenswrapper[4820]: I0201 14:36:01.230852 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjxjw" event={"ID":"6972bdb0-9870-475e-acc5-2da4ace45e58","Type":"ContainerDied","Data":"47ce564b5ce59e66d76ac944e67ed8d66cc8e97cc3438f38d5a5549f29bf21db"}
Feb 01 14:36:01 crc kubenswrapper[4820]: I0201 14:36:01.280613 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ndm5w"
Feb 01 14:36:01 crc kubenswrapper[4820]: I0201 14:36:01.350493 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cjxjw"
Feb 01 14:36:01 crc kubenswrapper[4820]: I0201 14:36:01.472512 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6972bdb0-9870-475e-acc5-2da4ace45e58-utilities\") pod \"6972bdb0-9870-475e-acc5-2da4ace45e58\" (UID: \"6972bdb0-9870-475e-acc5-2da4ace45e58\") "
Feb 01 14:36:01 crc kubenswrapper[4820]: I0201 14:36:01.472588 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x898p\" (UniqueName: \"kubernetes.io/projected/6972bdb0-9870-475e-acc5-2da4ace45e58-kube-api-access-x898p\") pod \"6972bdb0-9870-475e-acc5-2da4ace45e58\" (UID: \"6972bdb0-9870-475e-acc5-2da4ace45e58\") "
Feb 01 14:36:01 crc kubenswrapper[4820]: I0201 14:36:01.472736 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6972bdb0-9870-475e-acc5-2da4ace45e58-catalog-content\") pod \"6972bdb0-9870-475e-acc5-2da4ace45e58\" (UID: \"6972bdb0-9870-475e-acc5-2da4ace45e58\") "
Feb 01 14:36:01 crc kubenswrapper[4820]: I0201 14:36:01.473403 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6972bdb0-9870-475e-acc5-2da4ace45e58-utilities" (OuterVolumeSpecName: "utilities") pod "6972bdb0-9870-475e-acc5-2da4ace45e58" (UID: "6972bdb0-9870-475e-acc5-2da4ace45e58"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 14:36:01 crc kubenswrapper[4820]: I0201 14:36:01.480606 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6972bdb0-9870-475e-acc5-2da4ace45e58-kube-api-access-x898p" (OuterVolumeSpecName: "kube-api-access-x898p") pod "6972bdb0-9870-475e-acc5-2da4ace45e58" (UID: "6972bdb0-9870-475e-acc5-2da4ace45e58"). InnerVolumeSpecName "kube-api-access-x898p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 14:36:01 crc kubenswrapper[4820]: I0201 14:36:01.518591 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6972bdb0-9870-475e-acc5-2da4ace45e58-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6972bdb0-9870-475e-acc5-2da4ace45e58" (UID: "6972bdb0-9870-475e-acc5-2da4ace45e58"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 14:36:01 crc kubenswrapper[4820]: I0201 14:36:01.575786 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6972bdb0-9870-475e-acc5-2da4ace45e58-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 01 14:36:01 crc kubenswrapper[4820]: I0201 14:36:01.575838 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6972bdb0-9870-475e-acc5-2da4ace45e58-utilities\") on node \"crc\" DevicePath \"\""
Feb 01 14:36:01 crc kubenswrapper[4820]: I0201 14:36:01.575851 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x898p\" (UniqueName: \"kubernetes.io/projected/6972bdb0-9870-475e-acc5-2da4ace45e58-kube-api-access-x898p\") on node \"crc\" DevicePath \"\""
Feb 01 14:36:02 crc kubenswrapper[4820]: I0201 14:36:02.240474 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjxjw" event={"ID":"6972bdb0-9870-475e-acc5-2da4ace45e58","Type":"ContainerDied","Data":"8b423b09791f412a3d43eba3091a4fcb70b60cebae947b8ab11d973c55f0d782"}
Feb 01 14:36:02 crc kubenswrapper[4820]: I0201 14:36:02.240560 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cjxjw"
Feb 01 14:36:02 crc kubenswrapper[4820]: I0201 14:36:02.240900 4820 scope.go:117] "RemoveContainer" containerID="47ce564b5ce59e66d76ac944e67ed8d66cc8e97cc3438f38d5a5549f29bf21db"
Feb 01 14:36:02 crc kubenswrapper[4820]: I0201 14:36:02.257698 4820 scope.go:117] "RemoveContainer" containerID="f1cb2549bb9ab2b3a4594751cfbd0ca92e2a2f68bb4afab343763143637adb40"
Feb 01 14:36:02 crc kubenswrapper[4820]: I0201 14:36:02.270556 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cjxjw"]
Feb 01 14:36:02 crc kubenswrapper[4820]: I0201 14:36:02.275254 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cjxjw"]
Feb 01 14:36:02 crc kubenswrapper[4820]: I0201 14:36:02.300606 4820 scope.go:117] "RemoveContainer" containerID="61e5b3ff6ee80b351d025e1a0491abfe7359523c5e681c91cae7e2c2895a584e"
Feb 01 14:36:03 crc kubenswrapper[4820]: I0201 14:36:03.205745 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6972bdb0-9870-475e-acc5-2da4ace45e58" path="/var/lib/kubelet/pods/6972bdb0-9870-475e-acc5-2da4ace45e58/volumes"
Feb 01 14:36:03 crc kubenswrapper[4820]: I0201 14:36:03.988683 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ndm5w"]
Feb 01 14:36:03 crc kubenswrapper[4820]: I0201 14:36:03.988916 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ndm5w" podUID="ba5fd2e4-a1c2-448b-a69b-25fced202a4b" containerName="registry-server" containerID="cri-o://f51be7755b2a967cb640720ac5434a2cd8927e5ccda25071498ad41883d599e8" gracePeriod=2
Feb 01 14:36:05 crc kubenswrapper[4820]: I0201 14:36:05.006629 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c4qds"
Feb 01 14:36:05 crc kubenswrapper[4820]: I0201 14:36:05.006926 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c4qds"
Feb 01 14:36:05 crc kubenswrapper[4820]: I0201 14:36:05.042924 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c4qds"
Feb 01 14:36:05 crc kubenswrapper[4820]: I0201 14:36:05.261260 4820 generic.go:334] "Generic (PLEG): container finished" podID="ba5fd2e4-a1c2-448b-a69b-25fced202a4b" containerID="f51be7755b2a967cb640720ac5434a2cd8927e5ccda25071498ad41883d599e8" exitCode=0
Feb 01 14:36:05 crc kubenswrapper[4820]: I0201 14:36:05.261325 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndm5w" event={"ID":"ba5fd2e4-a1c2-448b-a69b-25fced202a4b","Type":"ContainerDied","Data":"f51be7755b2a967cb640720ac5434a2cd8927e5ccda25071498ad41883d599e8"}
Feb 01 14:36:05 crc kubenswrapper[4820]: I0201 14:36:05.301372 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c4qds"
Feb 01 14:36:05 crc kubenswrapper[4820]: I0201 14:36:05.365850 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ndm5w"
Feb 01 14:36:05 crc kubenswrapper[4820]: I0201 14:36:05.547001 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba5fd2e4-a1c2-448b-a69b-25fced202a4b-utilities\") pod \"ba5fd2e4-a1c2-448b-a69b-25fced202a4b\" (UID: \"ba5fd2e4-a1c2-448b-a69b-25fced202a4b\") "
Feb 01 14:36:05 crc kubenswrapper[4820]: I0201 14:36:05.547075 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf5js\" (UniqueName: \"kubernetes.io/projected/ba5fd2e4-a1c2-448b-a69b-25fced202a4b-kube-api-access-zf5js\") pod \"ba5fd2e4-a1c2-448b-a69b-25fced202a4b\" (UID: \"ba5fd2e4-a1c2-448b-a69b-25fced202a4b\") "
Feb 01 14:36:05 crc kubenswrapper[4820]: I0201 14:36:05.547178 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba5fd2e4-a1c2-448b-a69b-25fced202a4b-catalog-content\") pod \"ba5fd2e4-a1c2-448b-a69b-25fced202a4b\" (UID: \"ba5fd2e4-a1c2-448b-a69b-25fced202a4b\") "
Feb 01 14:36:05 crc kubenswrapper[4820]: I0201 14:36:05.547735 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba5fd2e4-a1c2-448b-a69b-25fced202a4b-utilities" (OuterVolumeSpecName: "utilities") pod "ba5fd2e4-a1c2-448b-a69b-25fced202a4b" (UID: "ba5fd2e4-a1c2-448b-a69b-25fced202a4b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 14:36:05 crc kubenswrapper[4820]: I0201 14:36:05.552652 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba5fd2e4-a1c2-448b-a69b-25fced202a4b-kube-api-access-zf5js" (OuterVolumeSpecName: "kube-api-access-zf5js") pod "ba5fd2e4-a1c2-448b-a69b-25fced202a4b" (UID: "ba5fd2e4-a1c2-448b-a69b-25fced202a4b"). InnerVolumeSpecName "kube-api-access-zf5js". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 14:36:05 crc kubenswrapper[4820]: I0201 14:36:05.649127 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba5fd2e4-a1c2-448b-a69b-25fced202a4b-utilities\") on node \"crc\" DevicePath \"\""
Feb 01 14:36:05 crc kubenswrapper[4820]: I0201 14:36:05.649166 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf5js\" (UniqueName: \"kubernetes.io/projected/ba5fd2e4-a1c2-448b-a69b-25fced202a4b-kube-api-access-zf5js\") on node \"crc\" DevicePath \"\""
Feb 01 14:36:05 crc kubenswrapper[4820]: I0201 14:36:05.690825 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba5fd2e4-a1c2-448b-a69b-25fced202a4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba5fd2e4-a1c2-448b-a69b-25fced202a4b" (UID: "ba5fd2e4-a1c2-448b-a69b-25fced202a4b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 14:36:05 crc kubenswrapper[4820]: I0201 14:36:05.750653 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba5fd2e4-a1c2-448b-a69b-25fced202a4b-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 01 14:36:06 crc kubenswrapper[4820]: I0201 14:36:06.272111 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndm5w" event={"ID":"ba5fd2e4-a1c2-448b-a69b-25fced202a4b","Type":"ContainerDied","Data":"4b3dfc758786e402e128f5e00b691c2f381d3e8c5d2238eb6d2b5495509ba0c4"}
Feb 01 14:36:06 crc kubenswrapper[4820]: I0201 14:36:06.272201 4820 scope.go:117] "RemoveContainer" containerID="f51be7755b2a967cb640720ac5434a2cd8927e5ccda25071498ad41883d599e8"
Feb 01 14:36:06 crc kubenswrapper[4820]: I0201 14:36:06.272146 4820 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-ndm5w" Feb 01 14:36:06 crc kubenswrapper[4820]: I0201 14:36:06.309199 4820 scope.go:117] "RemoveContainer" containerID="18330f382f865921b6229fa7d9ec92cd71546df92ba66951026d09cbf4abd1c0" Feb 01 14:36:06 crc kubenswrapper[4820]: I0201 14:36:06.330453 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ndm5w"] Feb 01 14:36:06 crc kubenswrapper[4820]: I0201 14:36:06.337905 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ndm5w"] Feb 01 14:36:06 crc kubenswrapper[4820]: I0201 14:36:06.345738 4820 scope.go:117] "RemoveContainer" containerID="3cf231458698367dfaf7f39c8c3879587bc0a56be67ff242729716ef689c1a00" Feb 01 14:36:07 crc kubenswrapper[4820]: I0201 14:36:07.207124 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba5fd2e4-a1c2-448b-a69b-25fced202a4b" path="/var/lib/kubelet/pods/ba5fd2e4-a1c2-448b-a69b-25fced202a4b/volumes" Feb 01 14:36:07 crc kubenswrapper[4820]: I0201 14:36:07.391525 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c4qds"] Feb 01 14:36:07 crc kubenswrapper[4820]: I0201 14:36:07.391792 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c4qds" podUID="36770a7b-a53b-4d5e-afa7-0e3e91c77d57" containerName="registry-server" containerID="cri-o://db488e89158f174c95f4869ae3635601fae25d06355ed12ca24ef33cef70c24a" gracePeriod=2 Feb 01 14:36:07 crc kubenswrapper[4820]: I0201 14:36:07.856954 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4qds" Feb 01 14:36:07 crc kubenswrapper[4820]: I0201 14:36:07.983332 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36770a7b-a53b-4d5e-afa7-0e3e91c77d57-utilities\") pod \"36770a7b-a53b-4d5e-afa7-0e3e91c77d57\" (UID: \"36770a7b-a53b-4d5e-afa7-0e3e91c77d57\") " Feb 01 14:36:07 crc kubenswrapper[4820]: I0201 14:36:07.984193 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36770a7b-a53b-4d5e-afa7-0e3e91c77d57-utilities" (OuterVolumeSpecName: "utilities") pod "36770a7b-a53b-4d5e-afa7-0e3e91c77d57" (UID: "36770a7b-a53b-4d5e-afa7-0e3e91c77d57"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:36:07 crc kubenswrapper[4820]: I0201 14:36:07.984248 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36770a7b-a53b-4d5e-afa7-0e3e91c77d57-catalog-content\") pod \"36770a7b-a53b-4d5e-afa7-0e3e91c77d57\" (UID: \"36770a7b-a53b-4d5e-afa7-0e3e91c77d57\") " Feb 01 14:36:07 crc kubenswrapper[4820]: I0201 14:36:07.984281 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26j4s\" (UniqueName: \"kubernetes.io/projected/36770a7b-a53b-4d5e-afa7-0e3e91c77d57-kube-api-access-26j4s\") pod \"36770a7b-a53b-4d5e-afa7-0e3e91c77d57\" (UID: \"36770a7b-a53b-4d5e-afa7-0e3e91c77d57\") " Feb 01 14:36:07 crc kubenswrapper[4820]: I0201 14:36:07.984587 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36770a7b-a53b-4d5e-afa7-0e3e91c77d57-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 14:36:07 crc kubenswrapper[4820]: I0201 14:36:07.990356 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36770a7b-a53b-4d5e-afa7-0e3e91c77d57-kube-api-access-26j4s" (OuterVolumeSpecName: "kube-api-access-26j4s") pod "36770a7b-a53b-4d5e-afa7-0e3e91c77d57" (UID: "36770a7b-a53b-4d5e-afa7-0e3e91c77d57"). InnerVolumeSpecName "kube-api-access-26j4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:36:08 crc kubenswrapper[4820]: I0201 14:36:08.085502 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26j4s\" (UniqueName: \"kubernetes.io/projected/36770a7b-a53b-4d5e-afa7-0e3e91c77d57-kube-api-access-26j4s\") on node \"crc\" DevicePath \"\"" Feb 01 14:36:08 crc kubenswrapper[4820]: I0201 14:36:08.286552 4820 generic.go:334] "Generic (PLEG): container finished" podID="36770a7b-a53b-4d5e-afa7-0e3e91c77d57" containerID="db488e89158f174c95f4869ae3635601fae25d06355ed12ca24ef33cef70c24a" exitCode=0 Feb 01 14:36:08 crc kubenswrapper[4820]: I0201 14:36:08.286600 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4qds" event={"ID":"36770a7b-a53b-4d5e-afa7-0e3e91c77d57","Type":"ContainerDied","Data":"db488e89158f174c95f4869ae3635601fae25d06355ed12ca24ef33cef70c24a"} Feb 01 14:36:08 crc kubenswrapper[4820]: I0201 14:36:08.286603 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c4qds" Feb 01 14:36:08 crc kubenswrapper[4820]: I0201 14:36:08.286644 4820 scope.go:117] "RemoveContainer" containerID="db488e89158f174c95f4869ae3635601fae25d06355ed12ca24ef33cef70c24a" Feb 01 14:36:08 crc kubenswrapper[4820]: I0201 14:36:08.286632 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4qds" event={"ID":"36770a7b-a53b-4d5e-afa7-0e3e91c77d57","Type":"ContainerDied","Data":"b5f1e2846c7a6616d130198e96ffc89cc3ccddb54967c636f3ec117a0cb22197"} Feb 01 14:36:08 crc kubenswrapper[4820]: I0201 14:36:08.301361 4820 scope.go:117] "RemoveContainer" containerID="eb4c2594cc8f0830f26b4b18f37cb15c9ca82af2539d5a342943635b2d489945" Feb 01 14:36:08 crc kubenswrapper[4820]: I0201 14:36:08.317017 4820 scope.go:117] "RemoveContainer" containerID="585aefa23e6b9370d6148cf0f1b073c7263b6cdad67ca3dabe2e4aa3bc463738" Feb 01 14:36:08 crc kubenswrapper[4820]: I0201 14:36:08.344239 4820 scope.go:117] "RemoveContainer" containerID="db488e89158f174c95f4869ae3635601fae25d06355ed12ca24ef33cef70c24a" Feb 01 14:36:08 crc kubenswrapper[4820]: E0201 14:36:08.344614 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db488e89158f174c95f4869ae3635601fae25d06355ed12ca24ef33cef70c24a\": container with ID starting with db488e89158f174c95f4869ae3635601fae25d06355ed12ca24ef33cef70c24a not found: ID does not exist" containerID="db488e89158f174c95f4869ae3635601fae25d06355ed12ca24ef33cef70c24a" Feb 01 14:36:08 crc kubenswrapper[4820]: I0201 14:36:08.344651 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db488e89158f174c95f4869ae3635601fae25d06355ed12ca24ef33cef70c24a"} err="failed to get container status \"db488e89158f174c95f4869ae3635601fae25d06355ed12ca24ef33cef70c24a\": rpc error: code = NotFound desc = could not find container \"db488e89158f174c95f4869ae3635601fae25d06355ed12ca24ef33cef70c24a\": container with ID starting with db488e89158f174c95f4869ae3635601fae25d06355ed12ca24ef33cef70c24a not found: ID does not exist" Feb 01 14:36:08 crc kubenswrapper[4820]: I0201 14:36:08.344678 4820 scope.go:117] "RemoveContainer" containerID="eb4c2594cc8f0830f26b4b18f37cb15c9ca82af2539d5a342943635b2d489945" Feb 01 14:36:08 crc kubenswrapper[4820]: E0201 14:36:08.344907 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb4c2594cc8f0830f26b4b18f37cb15c9ca82af2539d5a342943635b2d489945\": container with ID starting with eb4c2594cc8f0830f26b4b18f37cb15c9ca82af2539d5a342943635b2d489945 not found: ID does not exist" containerID="eb4c2594cc8f0830f26b4b18f37cb15c9ca82af2539d5a342943635b2d489945" Feb 01 14:36:08 crc kubenswrapper[4820]: I0201 14:36:08.344937 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb4c2594cc8f0830f26b4b18f37cb15c9ca82af2539d5a342943635b2d489945"} err="failed to get container status \"eb4c2594cc8f0830f26b4b18f37cb15c9ca82af2539d5a342943635b2d489945\": rpc error: code = NotFound desc = could not find container \"eb4c2594cc8f0830f26b4b18f37cb15c9ca82af2539d5a342943635b2d489945\": container with ID starting with eb4c2594cc8f0830f26b4b18f37cb15c9ca82af2539d5a342943635b2d489945 not found: ID does not exist" Feb 01 14:36:08 crc kubenswrapper[4820]: I0201 14:36:08.344970 4820 scope.go:117] "RemoveContainer" 
containerID="585aefa23e6b9370d6148cf0f1b073c7263b6cdad67ca3dabe2e4aa3bc463738" Feb 01 14:36:08 crc kubenswrapper[4820]: E0201 14:36:08.345394 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"585aefa23e6b9370d6148cf0f1b073c7263b6cdad67ca3dabe2e4aa3bc463738\": container with ID starting with 585aefa23e6b9370d6148cf0f1b073c7263b6cdad67ca3dabe2e4aa3bc463738 not found: ID does not exist" containerID="585aefa23e6b9370d6148cf0f1b073c7263b6cdad67ca3dabe2e4aa3bc463738" Feb 01 14:36:08 crc kubenswrapper[4820]: I0201 14:36:08.345424 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"585aefa23e6b9370d6148cf0f1b073c7263b6cdad67ca3dabe2e4aa3bc463738"} err="failed to get container status \"585aefa23e6b9370d6148cf0f1b073c7263b6cdad67ca3dabe2e4aa3bc463738\": rpc error: code = NotFound desc = could not find container \"585aefa23e6b9370d6148cf0f1b073c7263b6cdad67ca3dabe2e4aa3bc463738\": container with ID starting with 585aefa23e6b9370d6148cf0f1b073c7263b6cdad67ca3dabe2e4aa3bc463738 not found: ID does not exist" Feb 01 14:36:08 crc kubenswrapper[4820]: I0201 14:36:08.400239 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36770a7b-a53b-4d5e-afa7-0e3e91c77d57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36770a7b-a53b-4d5e-afa7-0e3e91c77d57" (UID: "36770a7b-a53b-4d5e-afa7-0e3e91c77d57"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:36:08 crc kubenswrapper[4820]: I0201 14:36:08.490356 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36770a7b-a53b-4d5e-afa7-0e3e91c77d57-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 14:36:08 crc kubenswrapper[4820]: I0201 14:36:08.612256 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c4qds"] Feb 01 14:36:08 crc kubenswrapper[4820]: I0201 14:36:08.617599 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c4qds"] Feb 01 14:36:09 crc kubenswrapper[4820]: I0201 14:36:09.205795 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36770a7b-a53b-4d5e-afa7-0e3e91c77d57" path="/var/lib/kubelet/pods/36770a7b-a53b-4d5e-afa7-0e3e91c77d57/volumes" Feb 01 14:36:11 crc kubenswrapper[4820]: I0201 14:36:11.937546 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-g92fk"] Feb 01 14:36:11 crc kubenswrapper[4820]: E0201 14:36:11.938917 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6972bdb0-9870-475e-acc5-2da4ace45e58" containerName="registry-server" Feb 01 14:36:11 crc kubenswrapper[4820]: I0201 14:36:11.938985 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6972bdb0-9870-475e-acc5-2da4ace45e58" containerName="registry-server" Feb 01 14:36:11 crc kubenswrapper[4820]: E0201 14:36:11.939052 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6972bdb0-9870-475e-acc5-2da4ace45e58" containerName="extract-utilities" Feb 01 14:36:11 crc kubenswrapper[4820]: I0201 14:36:11.939103 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6972bdb0-9870-475e-acc5-2da4ace45e58" containerName="extract-utilities" Feb 01 14:36:11 crc kubenswrapper[4820]: E0201 14:36:11.939188 4820 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ba5fd2e4-a1c2-448b-a69b-25fced202a4b" containerName="registry-server" Feb 01 14:36:11 crc kubenswrapper[4820]: I0201 14:36:11.939239 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba5fd2e4-a1c2-448b-a69b-25fced202a4b" containerName="registry-server" Feb 01 14:36:11 crc kubenswrapper[4820]: E0201 14:36:11.939294 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36770a7b-a53b-4d5e-afa7-0e3e91c77d57" containerName="extract-utilities" Feb 01 14:36:11 crc kubenswrapper[4820]: I0201 14:36:11.939341 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="36770a7b-a53b-4d5e-afa7-0e3e91c77d57" containerName="extract-utilities" Feb 01 14:36:11 crc kubenswrapper[4820]: E0201 14:36:11.939402 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36770a7b-a53b-4d5e-afa7-0e3e91c77d57" containerName="extract-content" Feb 01 14:36:11 crc kubenswrapper[4820]: I0201 14:36:11.939456 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="36770a7b-a53b-4d5e-afa7-0e3e91c77d57" containerName="extract-content" Feb 01 14:36:11 crc kubenswrapper[4820]: E0201 14:36:11.939515 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5fd2e4-a1c2-448b-a69b-25fced202a4b" containerName="extract-content" Feb 01 14:36:11 crc kubenswrapper[4820]: I0201 14:36:11.939569 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba5fd2e4-a1c2-448b-a69b-25fced202a4b" containerName="extract-content" Feb 01 14:36:11 crc kubenswrapper[4820]: E0201 14:36:11.939622 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6972bdb0-9870-475e-acc5-2da4ace45e58" containerName="extract-content" Feb 01 14:36:11 crc kubenswrapper[4820]: I0201 14:36:11.939668 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6972bdb0-9870-475e-acc5-2da4ace45e58" containerName="extract-content" Feb 01 14:36:11 crc kubenswrapper[4820]: E0201 14:36:11.939745 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5fd2e4-a1c2-448b-a69b-25fced202a4b" containerName="extract-utilities" Feb 01 14:36:11 crc kubenswrapper[4820]: I0201 14:36:11.939810 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba5fd2e4-a1c2-448b-a69b-25fced202a4b" containerName="extract-utilities" Feb 01 14:36:11 crc kubenswrapper[4820]: E0201 14:36:11.939907 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36770a7b-a53b-4d5e-afa7-0e3e91c77d57" containerName="registry-server" Feb 01 14:36:11 crc kubenswrapper[4820]: I0201 14:36:11.939982 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="36770a7b-a53b-4d5e-afa7-0e3e91c77d57" containerName="registry-server" Feb 01 14:36:11 crc kubenswrapper[4820]: I0201 14:36:11.940163 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="36770a7b-a53b-4d5e-afa7-0e3e91c77d57" containerName="registry-server" Feb 01 14:36:11 crc kubenswrapper[4820]: I0201 14:36:11.940223 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba5fd2e4-a1c2-448b-a69b-25fced202a4b" containerName="registry-server" Feb 01 14:36:11 crc kubenswrapper[4820]: I0201 14:36:11.940278 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6972bdb0-9870-475e-acc5-2da4ace45e58" containerName="registry-server" Feb 01 14:36:11 crc kubenswrapper[4820]: I0201 14:36:11.940987 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-g92fk" Feb 01 14:36:11 crc kubenswrapper[4820]: I0201 14:36:11.943629 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 01 14:36:11 crc kubenswrapper[4820]: I0201 14:36:11.943842 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 01 14:36:11 crc kubenswrapper[4820]: I0201 14:36:11.944087 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 01 14:36:11 crc kubenswrapper[4820]: I0201 14:36:11.947906 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-qqmxc" Feb 01 14:36:11 crc kubenswrapper[4820]: I0201 14:36:11.960848 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-g92fk"] Feb 01 14:36:11 crc kubenswrapper[4820]: I0201 14:36:11.991161 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g99q9"] Feb 01 14:36:11 crc kubenswrapper[4820]: I0201 14:36:11.992302 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g99q9" Feb 01 14:36:11 crc kubenswrapper[4820]: I0201 14:36:11.994037 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 01 14:36:12 crc kubenswrapper[4820]: I0201 14:36:12.006163 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g99q9"] Feb 01 14:36:12 crc kubenswrapper[4820]: I0201 14:36:12.036644 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzbkm\" (UniqueName: \"kubernetes.io/projected/d0b135d8-126c-4916-8e5d-2008cb2b4790-kube-api-access-rzbkm\") pod \"dnsmasq-dns-675f4bcbfc-g92fk\" (UID: \"d0b135d8-126c-4916-8e5d-2008cb2b4790\") " pod="openstack/dnsmasq-dns-675f4bcbfc-g92fk" Feb 01 14:36:12 crc kubenswrapper[4820]: I0201 14:36:12.036689 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b135d8-126c-4916-8e5d-2008cb2b4790-config\") pod \"dnsmasq-dns-675f4bcbfc-g92fk\" (UID: \"d0b135d8-126c-4916-8e5d-2008cb2b4790\") " pod="openstack/dnsmasq-dns-675f4bcbfc-g92fk" Feb 01 14:36:12 crc kubenswrapper[4820]: I0201 14:36:12.137556 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efca0d94-4689-487b-8cfb-cfc09fe18a07-config\") pod \"dnsmasq-dns-78dd6ddcc-g99q9\" (UID: \"efca0d94-4689-487b-8cfb-cfc09fe18a07\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g99q9" Feb 01 14:36:12 crc kubenswrapper[4820]: I0201 14:36:12.137663 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzbkm\" (UniqueName: \"kubernetes.io/projected/d0b135d8-126c-4916-8e5d-2008cb2b4790-kube-api-access-rzbkm\") pod \"dnsmasq-dns-675f4bcbfc-g92fk\" (UID: \"d0b135d8-126c-4916-8e5d-2008cb2b4790\") " pod="openstack/dnsmasq-dns-675f4bcbfc-g92fk" Feb 01 14:36:12 crc kubenswrapper[4820]: I0201 14:36:12.137692 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b135d8-126c-4916-8e5d-2008cb2b4790-config\") pod \"dnsmasq-dns-675f4bcbfc-g92fk\" (UID: \"d0b135d8-126c-4916-8e5d-2008cb2b4790\") " pod="openstack/dnsmasq-dns-675f4bcbfc-g92fk" Feb 01 14:36:12 crc 
kubenswrapper[4820]: I0201 14:36:12.137715 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efca0d94-4689-487b-8cfb-cfc09fe18a07-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g99q9\" (UID: \"efca0d94-4689-487b-8cfb-cfc09fe18a07\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g99q9" Feb 01 14:36:12 crc kubenswrapper[4820]: I0201 14:36:12.137737 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5gwd\" (UniqueName: \"kubernetes.io/projected/efca0d94-4689-487b-8cfb-cfc09fe18a07-kube-api-access-c5gwd\") pod \"dnsmasq-dns-78dd6ddcc-g99q9\" (UID: \"efca0d94-4689-487b-8cfb-cfc09fe18a07\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g99q9" Feb 01 14:36:12 crc kubenswrapper[4820]: I0201 14:36:12.138969 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b135d8-126c-4916-8e5d-2008cb2b4790-config\") pod \"dnsmasq-dns-675f4bcbfc-g92fk\" (UID: \"d0b135d8-126c-4916-8e5d-2008cb2b4790\") " pod="openstack/dnsmasq-dns-675f4bcbfc-g92fk" Feb 01 14:36:12 crc kubenswrapper[4820]: I0201 14:36:12.173222 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzbkm\" (UniqueName: \"kubernetes.io/projected/d0b135d8-126c-4916-8e5d-2008cb2b4790-kube-api-access-rzbkm\") pod \"dnsmasq-dns-675f4bcbfc-g92fk\" (UID: \"d0b135d8-126c-4916-8e5d-2008cb2b4790\") " pod="openstack/dnsmasq-dns-675f4bcbfc-g92fk" Feb 01 14:36:12 crc kubenswrapper[4820]: I0201 14:36:12.238918 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efca0d94-4689-487b-8cfb-cfc09fe18a07-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g99q9\" (UID: \"efca0d94-4689-487b-8cfb-cfc09fe18a07\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g99q9" Feb 01 14:36:12 crc kubenswrapper[4820]: I0201 14:36:12.238967 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5gwd\" (UniqueName: \"kubernetes.io/projected/efca0d94-4689-487b-8cfb-cfc09fe18a07-kube-api-access-c5gwd\") pod \"dnsmasq-dns-78dd6ddcc-g99q9\" (UID: \"efca0d94-4689-487b-8cfb-cfc09fe18a07\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g99q9" Feb 01 14:36:12 crc kubenswrapper[4820]: I0201 14:36:12.239019 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efca0d94-4689-487b-8cfb-cfc09fe18a07-config\") pod \"dnsmasq-dns-78dd6ddcc-g99q9\" (UID: \"efca0d94-4689-487b-8cfb-cfc09fe18a07\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g99q9" Feb 01 14:36:12 crc kubenswrapper[4820]: I0201 14:36:12.239909 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efca0d94-4689-487b-8cfb-cfc09fe18a07-config\") pod \"dnsmasq-dns-78dd6ddcc-g99q9\" (UID: \"efca0d94-4689-487b-8cfb-cfc09fe18a07\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g99q9" Feb 01 14:36:12 crc kubenswrapper[4820]: I0201 14:36:12.240561 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efca0d94-4689-487b-8cfb-cfc09fe18a07-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g99q9\" (UID: \"efca0d94-4689-487b-8cfb-cfc09fe18a07\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g99q9" Feb 01 14:36:12 crc kubenswrapper[4820]: I0201 14:36:12.256115 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-c5gwd\" (UniqueName: \"kubernetes.io/projected/efca0d94-4689-487b-8cfb-cfc09fe18a07-kube-api-access-c5gwd\") pod \"dnsmasq-dns-78dd6ddcc-g99q9\" (UID: \"efca0d94-4689-487b-8cfb-cfc09fe18a07\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g99q9" Feb 01 14:36:12 crc kubenswrapper[4820]: I0201 14:36:12.261901 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-g92fk" Feb 01 14:36:12 crc kubenswrapper[4820]: I0201 14:36:12.304854 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g99q9" Feb 01 14:36:12 crc kubenswrapper[4820]: I0201 14:36:12.483222 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-g92fk"] Feb 01 14:36:12 crc kubenswrapper[4820]: W0201 14:36:12.493727 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0b135d8_126c_4916_8e5d_2008cb2b4790.slice/crio-074e36448f378e07169606c3073010d032053b25e2011ead700454cc27d49cab WatchSource:0}: Error finding container 074e36448f378e07169606c3073010d032053b25e2011ead700454cc27d49cab: Status 404 returned error can't find the container with id 074e36448f378e07169606c3073010d032053b25e2011ead700454cc27d49cab Feb 01 14:36:12 crc kubenswrapper[4820]: I0201 14:36:12.753702 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g99q9"] Feb 01 14:36:13 crc kubenswrapper[4820]: I0201 14:36:13.325112 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-g99q9" event={"ID":"efca0d94-4689-487b-8cfb-cfc09fe18a07","Type":"ContainerStarted","Data":"ea8428e67619baad1f0f067b212c0777126b92e392c2125c1aaa4e110e5cac04"} Feb 01 14:36:13 crc kubenswrapper[4820]: I0201 14:36:13.327691 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-g92fk" event={"ID":"d0b135d8-126c-4916-8e5d-2008cb2b4790","Type":"ContainerStarted","Data":"074e36448f378e07169606c3073010d032053b25e2011ead700454cc27d49cab"} Feb 01 14:36:14 crc kubenswrapper[4820]: I0201 14:36:14.992297 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-g92fk"] Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.012765 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-br9mx"] Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.013833 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-br9mx" Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.044580 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-br9mx"] Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.181326 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/205da31d-858f-4e4b-a141-77427f4fa529-config\") pod \"dnsmasq-dns-666b6646f7-br9mx\" (UID: \"205da31d-858f-4e4b-a141-77427f4fa529\") " pod="openstack/dnsmasq-dns-666b6646f7-br9mx" Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.181391 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/205da31d-858f-4e4b-a141-77427f4fa529-dns-svc\") pod \"dnsmasq-dns-666b6646f7-br9mx\" (UID: \"205da31d-858f-4e4b-a141-77427f4fa529\") " pod="openstack/dnsmasq-dns-666b6646f7-br9mx" Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.181418 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6p2z\" (UniqueName: \"kubernetes.io/projected/205da31d-858f-4e4b-a141-77427f4fa529-kube-api-access-w6p2z\") pod \"dnsmasq-dns-666b6646f7-br9mx\" (UID: \"205da31d-858f-4e4b-a141-77427f4fa529\") " pod="openstack/dnsmasq-dns-666b6646f7-br9mx" Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.286999 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/205da31d-858f-4e4b-a141-77427f4fa529-config\") pod \"dnsmasq-dns-666b6646f7-br9mx\" (UID: \"205da31d-858f-4e4b-a141-77427f4fa529\") " pod="openstack/dnsmasq-dns-666b6646f7-br9mx" Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.287247 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/205da31d-858f-4e4b-a141-77427f4fa529-dns-svc\") pod \"dnsmasq-dns-666b6646f7-br9mx\" (UID: \"205da31d-858f-4e4b-a141-77427f4fa529\") " pod="openstack/dnsmasq-dns-666b6646f7-br9mx" Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.287281 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6p2z\" (UniqueName: \"kubernetes.io/projected/205da31d-858f-4e4b-a141-77427f4fa529-kube-api-access-w6p2z\") pod \"dnsmasq-dns-666b6646f7-br9mx\" (UID: \"205da31d-858f-4e4b-a141-77427f4fa529\") " pod="openstack/dnsmasq-dns-666b6646f7-br9mx" Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.288773 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/205da31d-858f-4e4b-a141-77427f4fa529-config\") pod \"dnsmasq-dns-666b6646f7-br9mx\" (UID: \"205da31d-858f-4e4b-a141-77427f4fa529\") " pod="openstack/dnsmasq-dns-666b6646f7-br9mx" Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.289510 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/205da31d-858f-4e4b-a141-77427f4fa529-dns-svc\") pod \"dnsmasq-dns-666b6646f7-br9mx\" (UID: \"205da31d-858f-4e4b-a141-77427f4fa529\") " pod="openstack/dnsmasq-dns-666b6646f7-br9mx" Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.334220 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6p2z\" (UniqueName: 
\"kubernetes.io/projected/205da31d-858f-4e4b-a141-77427f4fa529-kube-api-access-w6p2z\") pod \"dnsmasq-dns-666b6646f7-br9mx\" (UID: \"205da31d-858f-4e4b-a141-77427f4fa529\") " pod="openstack/dnsmasq-dns-666b6646f7-br9mx" Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.347358 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g99q9"] Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.357042 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hxdgb"] Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.358846 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-hxdgb" Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.365814 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hxdgb"] Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.493947 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pxhz\" (UniqueName: \"kubernetes.io/projected/8449fcd6-ef50-459d-9b6e-77c0065e2880-kube-api-access-4pxhz\") pod \"dnsmasq-dns-57d769cc4f-hxdgb\" (UID: \"8449fcd6-ef50-459d-9b6e-77c0065e2880\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxdgb" Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.494042 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8449fcd6-ef50-459d-9b6e-77c0065e2880-config\") pod \"dnsmasq-dns-57d769cc4f-hxdgb\" (UID: \"8449fcd6-ef50-459d-9b6e-77c0065e2880\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxdgb" Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.494083 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8449fcd6-ef50-459d-9b6e-77c0065e2880-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-hxdgb\" (UID: \"8449fcd6-ef50-459d-9b6e-77c0065e2880\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxdgb" Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.595503 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pxhz\" (UniqueName: \"kubernetes.io/projected/8449fcd6-ef50-459d-9b6e-77c0065e2880-kube-api-access-4pxhz\") pod \"dnsmasq-dns-57d769cc4f-hxdgb\" (UID: \"8449fcd6-ef50-459d-9b6e-77c0065e2880\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxdgb" Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.595578 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8449fcd6-ef50-459d-9b6e-77c0065e2880-config\") pod \"dnsmasq-dns-57d769cc4f-hxdgb\" (UID: \"8449fcd6-ef50-459d-9b6e-77c0065e2880\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxdgb" Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.595626 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8449fcd6-ef50-459d-9b6e-77c0065e2880-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-hxdgb\" (UID: \"8449fcd6-ef50-459d-9b6e-77c0065e2880\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxdgb" Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.596473 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8449fcd6-ef50-459d-9b6e-77c0065e2880-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-hxdgb\" (UID: 
\"8449fcd6-ef50-459d-9b6e-77c0065e2880\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxdgb" Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.596646 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8449fcd6-ef50-459d-9b6e-77c0065e2880-config\") pod \"dnsmasq-dns-57d769cc4f-hxdgb\" (UID: \"8449fcd6-ef50-459d-9b6e-77c0065e2880\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxdgb" Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.613294 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pxhz\" (UniqueName: \"kubernetes.io/projected/8449fcd6-ef50-459d-9b6e-77c0065e2880-kube-api-access-4pxhz\") pod \"dnsmasq-dns-57d769cc4f-hxdgb\" (UID: \"8449fcd6-ef50-459d-9b6e-77c0065e2880\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxdgb" Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.630387 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-br9mx" Feb 01 14:36:15 crc kubenswrapper[4820]: I0201 14:36:15.679240 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-hxdgb" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.111681 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-br9mx"] Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.179303 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.180368 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.182229 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.187118 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-l6rfn" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.187162 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.187216 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.189442 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.189582 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.190992 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.207649 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hxdgb"] Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.239214 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.307631 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e9cdc675-a849-4f24-bca1-ea5c04c55b52-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.308041 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.308121 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e9cdc675-a849-4f24-bca1-ea5c04c55b52-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.308164 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9cdc675-a849-4f24-bca1-ea5c04c55b52-config-data\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.308183 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e9cdc675-a849-4f24-bca1-ea5c04c55b52-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.308209 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqzsg\" (UniqueName: \"kubernetes.io/projected/e9cdc675-a849-4f24-bca1-ea5c04c55b52-kube-api-access-kqzsg\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.308225 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e9cdc675-a849-4f24-bca1-ea5c04c55b52-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.308252 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e9cdc675-a849-4f24-bca1-ea5c04c55b52-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.308381 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e9cdc675-a849-4f24-bca1-ea5c04c55b52-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.308676 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e9cdc675-a849-4f24-bca1-ea5c04c55b52-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 
14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.308740 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e9cdc675-a849-4f24-bca1-ea5c04c55b52-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.410049 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqzsg\" (UniqueName: \"kubernetes.io/projected/e9cdc675-a849-4f24-bca1-ea5c04c55b52-kube-api-access-kqzsg\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.410094 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e9cdc675-a849-4f24-bca1-ea5c04c55b52-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.410129 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e9cdc675-a849-4f24-bca1-ea5c04c55b52-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.410148 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e9cdc675-a849-4f24-bca1-ea5c04c55b52-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.410174 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e9cdc675-a849-4f24-bca1-ea5c04c55b52-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.410194 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e9cdc675-a849-4f24-bca1-ea5c04c55b52-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.410231 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e9cdc675-a849-4f24-bca1-ea5c04c55b52-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.410274 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.410306 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/e9cdc675-a849-4f24-bca1-ea5c04c55b52-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.410329 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9cdc675-a849-4f24-bca1-ea5c04c55b52-config-data\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.410345 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e9cdc675-a849-4f24-bca1-ea5c04c55b52-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.411630 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.411814 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e9cdc675-a849-4f24-bca1-ea5c04c55b52-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.412327 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e9cdc675-a849-4f24-bca1-ea5c04c55b52-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.414224 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e9cdc675-a849-4f24-bca1-ea5c04c55b52-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.414344 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9cdc675-a849-4f24-bca1-ea5c04c55b52-config-data\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.414566 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e9cdc675-a849-4f24-bca1-ea5c04c55b52-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.415957 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e9cdc675-a849-4f24-bca1-ea5c04c55b52-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.415978 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e9cdc675-a849-4f24-bca1-ea5c04c55b52-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.417202 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e9cdc675-a849-4f24-bca1-ea5c04c55b52-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.421547 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e9cdc675-a849-4f24-bca1-ea5c04c55b52-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.432372 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqzsg\" (UniqueName: \"kubernetes.io/projected/e9cdc675-a849-4f24-bca1-ea5c04c55b52-kube-api-access-kqzsg\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.434921 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.463085 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.464308 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.468466 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-wpl94" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.470752 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.471070 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.471252 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.471368 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.471487 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.471501 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.473262 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.507912 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.620054 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c49d65b-e444-406e-8b45-e95ba6bbb52b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.620103 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c49d65b-e444-406e-8b45-e95ba6bbb52b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.620187 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c49d65b-e444-406e-8b45-e95ba6bbb52b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.620236 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c49d65b-e444-406e-8b45-e95ba6bbb52b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.620265 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c49d65b-e444-406e-8b45-e95ba6bbb52b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.620294 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c49d65b-e444-406e-8b45-e95ba6bbb52b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.620357 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.620398 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c49d65b-e444-406e-8b45-e95ba6bbb52b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.620420 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c49d65b-e444-406e-8b45-e95ba6bbb52b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.620473 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c49d65b-e444-406e-8b45-e95ba6bbb52b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.620496 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vct5w\" (UniqueName: \"kubernetes.io/projected/4c49d65b-e444-406e-8b45-e95ba6bbb52b-kube-api-access-vct5w\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.722485 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c49d65b-e444-406e-8b45-e95ba6bbb52b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.722528 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vct5w\" (UniqueName: \"kubernetes.io/projected/4c49d65b-e444-406e-8b45-e95ba6bbb52b-kube-api-access-vct5w\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.722575 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c49d65b-e444-406e-8b45-e95ba6bbb52b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.722596 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c49d65b-e444-406e-8b45-e95ba6bbb52b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.722620 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c49d65b-e444-406e-8b45-e95ba6bbb52b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.722645 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c49d65b-e444-406e-8b45-e95ba6bbb52b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.722666 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c49d65b-e444-406e-8b45-e95ba6bbb52b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 
14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.722682 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c49d65b-e444-406e-8b45-e95ba6bbb52b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.722718 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.722741 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c49d65b-e444-406e-8b45-e95ba6bbb52b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.722762 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c49d65b-e444-406e-8b45-e95ba6bbb52b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.722983 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.723480 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c49d65b-e444-406e-8b45-e95ba6bbb52b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.723619 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c49d65b-e444-406e-8b45-e95ba6bbb52b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.723711 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c49d65b-e444-406e-8b45-e95ba6bbb52b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.723758 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c49d65b-e444-406e-8b45-e95ba6bbb52b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.723846 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/4c49d65b-e444-406e-8b45-e95ba6bbb52b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.728981 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c49d65b-e444-406e-8b45-e95ba6bbb52b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.735795 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c49d65b-e444-406e-8b45-e95ba6bbb52b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.748752 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vct5w\" (UniqueName: \"kubernetes.io/projected/4c49d65b-e444-406e-8b45-e95ba6bbb52b-kube-api-access-vct5w\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.749015 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c49d65b-e444-406e-8b45-e95ba6bbb52b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.760299 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c49d65b-e444-406e-8b45-e95ba6bbb52b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.785101 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:16 crc kubenswrapper[4820]: I0201 14:36:16.809314 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.621650 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.623041 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.630207 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.630415 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.631469 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-wxzsf" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.631485 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.634220 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.644744 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.743182 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b\") " pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.743530 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b-kolla-config\") pod \"openstack-galera-0\" (UID: \"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b\") " pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.743562 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k95nn\" (UniqueName: \"kubernetes.io/projected/e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b-kube-api-access-k95nn\") pod \"openstack-galera-0\" (UID: \"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b\") " pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.743590 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b\") " pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.743608 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b\") " pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.743647 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b\") " pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.743680 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b\") " pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.743700 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b-config-data-default\") pod \"openstack-galera-0\" (UID: \"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b\") " pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.844754 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b-kolla-config\") pod \"openstack-galera-0\" (UID: \"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b\") " pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.844800 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k95nn\" (UniqueName: \"kubernetes.io/projected/e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b-kube-api-access-k95nn\") pod \"openstack-galera-0\" (UID: \"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b\") " pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.844835 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b\") " pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.844856 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b\") " pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.844906 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b\") " pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.844941 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b\") " pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.844964 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b-config-data-default\") pod \"openstack-galera-0\" (UID: \"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b\") " pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.844991 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b\") " pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.845938 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b\") " pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.846555 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b-kolla-config\") pod \"openstack-galera-0\" (UID: \"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b\") " pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.846855 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.851703 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b\") " pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.852377 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b-config-data-default\") pod \"openstack-galera-0\" (UID: \"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b\") " pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.853771 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b\") " pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.857940 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b\") " pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.863545 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k95nn\" (UniqueName: \"kubernetes.io/projected/e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b-kube-api-access-k95nn\") pod \"openstack-galera-0\" (UID: \"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b\") " pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.867123 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b\") " pod="openstack/openstack-galera-0" Feb 01 14:36:17 crc kubenswrapper[4820]: I0201 14:36:17.947973 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.166487 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.167706 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.172990 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-rvt2j" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.173357 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.173500 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.181233 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.191182 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.243374 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.243432 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.263964 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/530f8225-6e15-4177-9b64-24e4f767f6c5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"530f8225-6e15-4177-9b64-24e4f767f6c5\") " pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.264228 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"530f8225-6e15-4177-9b64-24e4f767f6c5\") " pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.264368 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/530f8225-6e15-4177-9b64-24e4f767f6c5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"530f8225-6e15-4177-9b64-24e4f767f6c5\") " pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.264470 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/530f8225-6e15-4177-9b64-24e4f767f6c5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: 
\"530f8225-6e15-4177-9b64-24e4f767f6c5\") " pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.264585 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/530f8225-6e15-4177-9b64-24e4f767f6c5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"530f8225-6e15-4177-9b64-24e4f767f6c5\") " pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.264660 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/530f8225-6e15-4177-9b64-24e4f767f6c5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"530f8225-6e15-4177-9b64-24e4f767f6c5\") " pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.264744 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl8jt\" (UniqueName: \"kubernetes.io/projected/530f8225-6e15-4177-9b64-24e4f767f6c5-kube-api-access-zl8jt\") pod \"openstack-cell1-galera-0\" (UID: \"530f8225-6e15-4177-9b64-24e4f767f6c5\") " pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.264817 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/530f8225-6e15-4177-9b64-24e4f767f6c5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"530f8225-6e15-4177-9b64-24e4f767f6c5\") " pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: W0201 14:36:19.347756 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8449fcd6_ef50_459d_9b6e_77c0065e2880.slice/crio-e6f7288324f47e514c5362d1214901f64daf30c79d5fa282861f9fac00d4eeac WatchSource:0}: Error finding container e6f7288324f47e514c5362d1214901f64daf30c79d5fa282861f9fac00d4eeac: Status 404 returned error can't find the container with id e6f7288324f47e514c5362d1214901f64daf30c79d5fa282861f9fac00d4eeac Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.366652 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/530f8225-6e15-4177-9b64-24e4f767f6c5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"530f8225-6e15-4177-9b64-24e4f767f6c5\") " pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.367194 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/530f8225-6e15-4177-9b64-24e4f767f6c5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"530f8225-6e15-4177-9b64-24e4f767f6c5\") " pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.367236 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"530f8225-6e15-4177-9b64-24e4f767f6c5\") " pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.367328 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/530f8225-6e15-4177-9b64-24e4f767f6c5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"530f8225-6e15-4177-9b64-24e4f767f6c5\") " pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.367367 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/530f8225-6e15-4177-9b64-24e4f767f6c5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"530f8225-6e15-4177-9b64-24e4f767f6c5\") " pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.367384 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/530f8225-6e15-4177-9b64-24e4f767f6c5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"530f8225-6e15-4177-9b64-24e4f767f6c5\") " pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.367401 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/530f8225-6e15-4177-9b64-24e4f767f6c5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"530f8225-6e15-4177-9b64-24e4f767f6c5\") " pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.367429 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl8jt\" (UniqueName: \"kubernetes.io/projected/530f8225-6e15-4177-9b64-24e4f767f6c5-kube-api-access-zl8jt\") pod \"openstack-cell1-galera-0\" (UID: \"530f8225-6e15-4177-9b64-24e4f767f6c5\") " pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.367788 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/530f8225-6e15-4177-9b64-24e4f767f6c5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"530f8225-6e15-4177-9b64-24e4f767f6c5\") " pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.367850 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/530f8225-6e15-4177-9b64-24e4f767f6c5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"530f8225-6e15-4177-9b64-24e4f767f6c5\") " pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.367918 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"530f8225-6e15-4177-9b64-24e4f767f6c5\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.368617 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/530f8225-6e15-4177-9b64-24e4f767f6c5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"530f8225-6e15-4177-9b64-24e4f767f6c5\") " pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.368982 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/530f8225-6e15-4177-9b64-24e4f767f6c5-operator-scripts\") pod 
\"openstack-cell1-galera-0\" (UID: \"530f8225-6e15-4177-9b64-24e4f767f6c5\") " pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.374501 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/530f8225-6e15-4177-9b64-24e4f767f6c5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"530f8225-6e15-4177-9b64-24e4f767f6c5\") " pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.382646 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/530f8225-6e15-4177-9b64-24e4f767f6c5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"530f8225-6e15-4177-9b64-24e4f767f6c5\") " pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.426378 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl8jt\" (UniqueName: \"kubernetes.io/projected/530f8225-6e15-4177-9b64-24e4f767f6c5-kube-api-access-zl8jt\") pod \"openstack-cell1-galera-0\" (UID: \"530f8225-6e15-4177-9b64-24e4f767f6c5\") " pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.431436 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-hxdgb" event={"ID":"8449fcd6-ef50-459d-9b6e-77c0065e2880","Type":"ContainerStarted","Data":"e6f7288324f47e514c5362d1214901f64daf30c79d5fa282861f9fac00d4eeac"} Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.445543 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"530f8225-6e15-4177-9b64-24e4f767f6c5\") " pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.450462 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-br9mx" event={"ID":"205da31d-858f-4e4b-a141-77427f4fa529","Type":"ContainerStarted","Data":"f92a5063c454bb66e7f135cc8a7e7071f7112c1ca33cfbcfa8f42791260e4b03"} Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.516110 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.517137 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.518462 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.519809 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-vdb4v" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.520056 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.520720 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.541352 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.569964 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aa35f7d5-c06d-4d2e-8806-c6e638c9db02-kolla-config\") pod \"memcached-0\" (UID: \"aa35f7d5-c06d-4d2e-8806-c6e638c9db02\") " pod="openstack/memcached-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.570358 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa35f7d5-c06d-4d2e-8806-c6e638c9db02-config-data\") pod \"memcached-0\" (UID: \"aa35f7d5-c06d-4d2e-8806-c6e638c9db02\") " pod="openstack/memcached-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.570573 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa35f7d5-c06d-4d2e-8806-c6e638c9db02-memcached-tls-certs\") pod \"memcached-0\" (UID: \"aa35f7d5-c06d-4d2e-8806-c6e638c9db02\") " pod="openstack/memcached-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.570681 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa35f7d5-c06d-4d2e-8806-c6e638c9db02-combined-ca-bundle\") pod \"memcached-0\" (UID: \"aa35f7d5-c06d-4d2e-8806-c6e638c9db02\") " pod="openstack/memcached-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.570851 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7pwp\" (UniqueName: \"kubernetes.io/projected/aa35f7d5-c06d-4d2e-8806-c6e638c9db02-kube-api-access-q7pwp\") pod \"memcached-0\" (UID: \"aa35f7d5-c06d-4d2e-8806-c6e638c9db02\") " pod="openstack/memcached-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.672801 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aa35f7d5-c06d-4d2e-8806-c6e638c9db02-kolla-config\") pod \"memcached-0\" (UID: \"aa35f7d5-c06d-4d2e-8806-c6e638c9db02\") " pod="openstack/memcached-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.672854 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa35f7d5-c06d-4d2e-8806-c6e638c9db02-config-data\") pod \"memcached-0\" (UID: \"aa35f7d5-c06d-4d2e-8806-c6e638c9db02\") " pod="openstack/memcached-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.672892 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/aa35f7d5-c06d-4d2e-8806-c6e638c9db02-memcached-tls-certs\") pod \"memcached-0\" (UID: \"aa35f7d5-c06d-4d2e-8806-c6e638c9db02\") " pod="openstack/memcached-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.672925 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa35f7d5-c06d-4d2e-8806-c6e638c9db02-combined-ca-bundle\") pod \"memcached-0\" (UID: \"aa35f7d5-c06d-4d2e-8806-c6e638c9db02\") " pod="openstack/memcached-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.672943 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7pwp\" (UniqueName: \"kubernetes.io/projected/aa35f7d5-c06d-4d2e-8806-c6e638c9db02-kube-api-access-q7pwp\") pod \"memcached-0\" (UID: \"aa35f7d5-c06d-4d2e-8806-c6e638c9db02\") " pod="openstack/memcached-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.673578 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aa35f7d5-c06d-4d2e-8806-c6e638c9db02-kolla-config\") pod \"memcached-0\" (UID: \"aa35f7d5-c06d-4d2e-8806-c6e638c9db02\") " pod="openstack/memcached-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.673816 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa35f7d5-c06d-4d2e-8806-c6e638c9db02-config-data\") pod \"memcached-0\" (UID: \"aa35f7d5-c06d-4d2e-8806-c6e638c9db02\") " pod="openstack/memcached-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.675862 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa35f7d5-c06d-4d2e-8806-c6e638c9db02-combined-ca-bundle\") pod \"memcached-0\" (UID: \"aa35f7d5-c06d-4d2e-8806-c6e638c9db02\") " pod="openstack/memcached-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.675891 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa35f7d5-c06d-4d2e-8806-c6e638c9db02-memcached-tls-certs\") pod \"memcached-0\" (UID: \"aa35f7d5-c06d-4d2e-8806-c6e638c9db02\") " pod="openstack/memcached-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.690864 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7pwp\" (UniqueName: \"kubernetes.io/projected/aa35f7d5-c06d-4d2e-8806-c6e638c9db02-kube-api-access-q7pwp\") pod \"memcached-0\" (UID: \"aa35f7d5-c06d-4d2e-8806-c6e638c9db02\") " pod="openstack/memcached-0" Feb 01 14:36:19 crc kubenswrapper[4820]: I0201 14:36:19.847629 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 01 14:36:21 crc kubenswrapper[4820]: I0201 14:36:21.275179 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 01 14:36:21 crc kubenswrapper[4820]: I0201 14:36:21.276609 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 01 14:36:21 crc kubenswrapper[4820]: I0201 14:36:21.278749 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-vxplh" Feb 01 14:36:21 crc kubenswrapper[4820]: I0201 14:36:21.291024 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 01 14:36:21 crc kubenswrapper[4820]: I0201 14:36:21.404665 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srtkx\" (UniqueName: \"kubernetes.io/projected/834dff46-eb6f-4646-830a-a665bcd1461b-kube-api-access-srtkx\") pod \"kube-state-metrics-0\" (UID: \"834dff46-eb6f-4646-830a-a665bcd1461b\") " pod="openstack/kube-state-metrics-0" Feb 01 14:36:21 crc kubenswrapper[4820]: I0201 14:36:21.505568 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srtkx\" (UniqueName: \"kubernetes.io/projected/834dff46-eb6f-4646-830a-a665bcd1461b-kube-api-access-srtkx\") pod \"kube-state-metrics-0\" (UID: \"834dff46-eb6f-4646-830a-a665bcd1461b\") " pod="openstack/kube-state-metrics-0" Feb 01 14:36:21 crc kubenswrapper[4820]: I0201 14:36:21.535153 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srtkx\" (UniqueName: \"kubernetes.io/projected/834dff46-eb6f-4646-830a-a665bcd1461b-kube-api-access-srtkx\") pod \"kube-state-metrics-0\" (UID: \"834dff46-eb6f-4646-830a-a665bcd1461b\") " pod="openstack/kube-state-metrics-0" Feb 01 14:36:21 crc kubenswrapper[4820]: I0201 14:36:21.592522 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.764528 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wrqhs"] Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.765924 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wrqhs" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.768187 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-5r5xn" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.768450 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.776089 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-rqms2"] Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.777638 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-rqms2" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.780257 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.789938 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wrqhs"] Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.804601 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rqms2"] Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.877228 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj5bb\" (UniqueName: \"kubernetes.io/projected/0badf713-2cde-439b-8a0e-c2eedac05b99-kube-api-access-qj5bb\") pod \"ovn-controller-wrqhs\" (UID: \"0badf713-2cde-439b-8a0e-c2eedac05b99\") " pod="openstack/ovn-controller-wrqhs" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.877269 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/75447ae7-687e-45f6-925f-7091cd5c8930-var-log\") pod \"ovn-controller-ovs-rqms2\" (UID: \"75447ae7-687e-45f6-925f-7091cd5c8930\") " pod="openstack/ovn-controller-ovs-rqms2" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.877290 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0badf713-2cde-439b-8a0e-c2eedac05b99-combined-ca-bundle\") pod \"ovn-controller-wrqhs\" (UID: \"0badf713-2cde-439b-8a0e-c2eedac05b99\") " pod="openstack/ovn-controller-wrqhs" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.877306 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/75447ae7-687e-45f6-925f-7091cd5c8930-var-lib\") pod \"ovn-controller-ovs-rqms2\" (UID: \"75447ae7-687e-45f6-925f-7091cd5c8930\") " pod="openstack/ovn-controller-ovs-rqms2" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.877334 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0badf713-2cde-439b-8a0e-c2eedac05b99-scripts\") pod \"ovn-controller-wrqhs\" (UID: \"0badf713-2cde-439b-8a0e-c2eedac05b99\") " pod="openstack/ovn-controller-wrqhs" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.877350 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/75447ae7-687e-45f6-925f-7091cd5c8930-var-run\") pod \"ovn-controller-ovs-rqms2\" (UID: \"75447ae7-687e-45f6-925f-7091cd5c8930\") " pod="openstack/ovn-controller-ovs-rqms2" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.877386 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0badf713-2cde-439b-8a0e-c2eedac05b99-var-run-ovn\") pod \"ovn-controller-wrqhs\" (UID: \"0badf713-2cde-439b-8a0e-c2eedac05b99\") " pod="openstack/ovn-controller-wrqhs" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.877425 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/75447ae7-687e-45f6-925f-7091cd5c8930-etc-ovs\") pod 
\"ovn-controller-ovs-rqms2\" (UID: \"75447ae7-687e-45f6-925f-7091cd5c8930\") " pod="openstack/ovn-controller-ovs-rqms2" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.877451 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75447ae7-687e-45f6-925f-7091cd5c8930-scripts\") pod \"ovn-controller-ovs-rqms2\" (UID: \"75447ae7-687e-45f6-925f-7091cd5c8930\") " pod="openstack/ovn-controller-ovs-rqms2" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.877544 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0badf713-2cde-439b-8a0e-c2eedac05b99-var-run\") pod \"ovn-controller-wrqhs\" (UID: \"0badf713-2cde-439b-8a0e-c2eedac05b99\") " pod="openstack/ovn-controller-wrqhs" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.877591 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x757z\" (UniqueName: \"kubernetes.io/projected/75447ae7-687e-45f6-925f-7091cd5c8930-kube-api-access-x757z\") pod \"ovn-controller-ovs-rqms2\" (UID: \"75447ae7-687e-45f6-925f-7091cd5c8930\") " pod="openstack/ovn-controller-ovs-rqms2" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.877613 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/0badf713-2cde-439b-8a0e-c2eedac05b99-ovn-controller-tls-certs\") pod \"ovn-controller-wrqhs\" (UID: \"0badf713-2cde-439b-8a0e-c2eedac05b99\") " pod="openstack/ovn-controller-wrqhs" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.877627 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0badf713-2cde-439b-8a0e-c2eedac05b99-var-log-ovn\") pod \"ovn-controller-wrqhs\" (UID: \"0badf713-2cde-439b-8a0e-c2eedac05b99\") " pod="openstack/ovn-controller-wrqhs" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.978608 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0badf713-2cde-439b-8a0e-c2eedac05b99-var-run-ovn\") pod \"ovn-controller-wrqhs\" (UID: \"0badf713-2cde-439b-8a0e-c2eedac05b99\") " pod="openstack/ovn-controller-wrqhs" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.978689 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/75447ae7-687e-45f6-925f-7091cd5c8930-etc-ovs\") pod \"ovn-controller-ovs-rqms2\" (UID: \"75447ae7-687e-45f6-925f-7091cd5c8930\") " pod="openstack/ovn-controller-ovs-rqms2" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.978727 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75447ae7-687e-45f6-925f-7091cd5c8930-scripts\") pod \"ovn-controller-ovs-rqms2\" (UID: \"75447ae7-687e-45f6-925f-7091cd5c8930\") " pod="openstack/ovn-controller-ovs-rqms2" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.978752 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0badf713-2cde-439b-8a0e-c2eedac05b99-var-run\") pod \"ovn-controller-wrqhs\" (UID: \"0badf713-2cde-439b-8a0e-c2eedac05b99\") " pod="openstack/ovn-controller-wrqhs" 
Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.978797 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x757z\" (UniqueName: \"kubernetes.io/projected/75447ae7-687e-45f6-925f-7091cd5c8930-kube-api-access-x757z\") pod \"ovn-controller-ovs-rqms2\" (UID: \"75447ae7-687e-45f6-925f-7091cd5c8930\") " pod="openstack/ovn-controller-ovs-rqms2" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.978827 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/0badf713-2cde-439b-8a0e-c2eedac05b99-ovn-controller-tls-certs\") pod \"ovn-controller-wrqhs\" (UID: \"0badf713-2cde-439b-8a0e-c2eedac05b99\") " pod="openstack/ovn-controller-wrqhs" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.978848 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0badf713-2cde-439b-8a0e-c2eedac05b99-var-log-ovn\") pod \"ovn-controller-wrqhs\" (UID: \"0badf713-2cde-439b-8a0e-c2eedac05b99\") " pod="openstack/ovn-controller-wrqhs" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.978898 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj5bb\" (UniqueName: \"kubernetes.io/projected/0badf713-2cde-439b-8a0e-c2eedac05b99-kube-api-access-qj5bb\") pod \"ovn-controller-wrqhs\" (UID: \"0badf713-2cde-439b-8a0e-c2eedac05b99\") " pod="openstack/ovn-controller-wrqhs" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.978921 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/75447ae7-687e-45f6-925f-7091cd5c8930-var-log\") pod \"ovn-controller-ovs-rqms2\" (UID: \"75447ae7-687e-45f6-925f-7091cd5c8930\") " pod="openstack/ovn-controller-ovs-rqms2" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.978943 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0badf713-2cde-439b-8a0e-c2eedac05b99-combined-ca-bundle\") pod \"ovn-controller-wrqhs\" (UID: \"0badf713-2cde-439b-8a0e-c2eedac05b99\") " pod="openstack/ovn-controller-wrqhs" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.978963 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/75447ae7-687e-45f6-925f-7091cd5c8930-var-lib\") pod \"ovn-controller-ovs-rqms2\" (UID: \"75447ae7-687e-45f6-925f-7091cd5c8930\") " pod="openstack/ovn-controller-ovs-rqms2" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.979001 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0badf713-2cde-439b-8a0e-c2eedac05b99-scripts\") pod \"ovn-controller-wrqhs\" (UID: \"0badf713-2cde-439b-8a0e-c2eedac05b99\") " pod="openstack/ovn-controller-wrqhs" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.979019 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/75447ae7-687e-45f6-925f-7091cd5c8930-var-run\") pod \"ovn-controller-ovs-rqms2\" (UID: \"75447ae7-687e-45f6-925f-7091cd5c8930\") " pod="openstack/ovn-controller-ovs-rqms2" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.979461 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/0badf713-2cde-439b-8a0e-c2eedac05b99-var-run-ovn\") pod \"ovn-controller-wrqhs\" (UID: \"0badf713-2cde-439b-8a0e-c2eedac05b99\") " pod="openstack/ovn-controller-wrqhs" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.979972 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/75447ae7-687e-45f6-925f-7091cd5c8930-var-log\") pod \"ovn-controller-ovs-rqms2\" (UID: \"75447ae7-687e-45f6-925f-7091cd5c8930\") " pod="openstack/ovn-controller-ovs-rqms2" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.980117 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0badf713-2cde-439b-8a0e-c2eedac05b99-var-log-ovn\") pod \"ovn-controller-wrqhs\" (UID: \"0badf713-2cde-439b-8a0e-c2eedac05b99\") " pod="openstack/ovn-controller-wrqhs" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.980123 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/75447ae7-687e-45f6-925f-7091cd5c8930-var-lib\") pod \"ovn-controller-ovs-rqms2\" (UID: \"75447ae7-687e-45f6-925f-7091cd5c8930\") " pod="openstack/ovn-controller-ovs-rqms2" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.980124 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/75447ae7-687e-45f6-925f-7091cd5c8930-etc-ovs\") pod \"ovn-controller-ovs-rqms2\" (UID: \"75447ae7-687e-45f6-925f-7091cd5c8930\") " pod="openstack/ovn-controller-ovs-rqms2" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.980248 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/75447ae7-687e-45f6-925f-7091cd5c8930-var-run\") pod \"ovn-controller-ovs-rqms2\" (UID: \"75447ae7-687e-45f6-925f-7091cd5c8930\") " pod="openstack/ovn-controller-ovs-rqms2" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.980267 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0badf713-2cde-439b-8a0e-c2eedac05b99-var-run\") pod \"ovn-controller-wrqhs\" (UID: \"0badf713-2cde-439b-8a0e-c2eedac05b99\") " pod="openstack/ovn-controller-wrqhs" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.982216 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0badf713-2cde-439b-8a0e-c2eedac05b99-scripts\") pod \"ovn-controller-wrqhs\" (UID: \"0badf713-2cde-439b-8a0e-c2eedac05b99\") " pod="openstack/ovn-controller-wrqhs" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.982316 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75447ae7-687e-45f6-925f-7091cd5c8930-scripts\") pod \"ovn-controller-ovs-rqms2\" (UID: \"75447ae7-687e-45f6-925f-7091cd5c8930\") " pod="openstack/ovn-controller-ovs-rqms2" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.985134 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0badf713-2cde-439b-8a0e-c2eedac05b99-combined-ca-bundle\") pod \"ovn-controller-wrqhs\" (UID: \"0badf713-2cde-439b-8a0e-c2eedac05b99\") " pod="openstack/ovn-controller-wrqhs" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.985659 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/0badf713-2cde-439b-8a0e-c2eedac05b99-ovn-controller-tls-certs\") pod \"ovn-controller-wrqhs\" (UID: \"0badf713-2cde-439b-8a0e-c2eedac05b99\") " pod="openstack/ovn-controller-wrqhs" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.994155 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x757z\" (UniqueName: \"kubernetes.io/projected/75447ae7-687e-45f6-925f-7091cd5c8930-kube-api-access-x757z\") pod \"ovn-controller-ovs-rqms2\" (UID: \"75447ae7-687e-45f6-925f-7091cd5c8930\") " pod="openstack/ovn-controller-ovs-rqms2" Feb 01 14:36:25 crc kubenswrapper[4820]: I0201 14:36:25.994343 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj5bb\" (UniqueName: \"kubernetes.io/projected/0badf713-2cde-439b-8a0e-c2eedac05b99-kube-api-access-qj5bb\") pod \"ovn-controller-wrqhs\" (UID: \"0badf713-2cde-439b-8a0e-c2eedac05b99\") " pod="openstack/ovn-controller-wrqhs" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.096265 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wrqhs" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.106393 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-rqms2" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.774773 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.775997 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.777758 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.778033 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.778969 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-cmpst" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.779129 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.779217 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.798126 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.895196 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"48888a7d-f908-4f3c-9335-d2ebe8e19690\") " pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.895240 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48888a7d-f908-4f3c-9335-d2ebe8e19690-config\") pod \"ovsdbserver-nb-0\" (UID: \"48888a7d-f908-4f3c-9335-d2ebe8e19690\") " pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.895269 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48888a7d-f908-4f3c-9335-d2ebe8e19690-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"48888a7d-f908-4f3c-9335-d2ebe8e19690\") " pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.895291 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/48888a7d-f908-4f3c-9335-d2ebe8e19690-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"48888a7d-f908-4f3c-9335-d2ebe8e19690\") " pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.895323 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48888a7d-f908-4f3c-9335-d2ebe8e19690-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"48888a7d-f908-4f3c-9335-d2ebe8e19690\") " pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.895389 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/48888a7d-f908-4f3c-9335-d2ebe8e19690-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"48888a7d-f908-4f3c-9335-d2ebe8e19690\") " pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.895456 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/48888a7d-f908-4f3c-9335-d2ebe8e19690-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"48888a7d-f908-4f3c-9335-d2ebe8e19690\") " pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.895476 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lrm2\" (UniqueName: \"kubernetes.io/projected/48888a7d-f908-4f3c-9335-d2ebe8e19690-kube-api-access-7lrm2\") pod \"ovsdbserver-nb-0\" (UID: \"48888a7d-f908-4f3c-9335-d2ebe8e19690\") " pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.996463 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"48888a7d-f908-4f3c-9335-d2ebe8e19690\") " pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.996509 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48888a7d-f908-4f3c-9335-d2ebe8e19690-config\") pod \"ovsdbserver-nb-0\" (UID: \"48888a7d-f908-4f3c-9335-d2ebe8e19690\") " pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.996535 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48888a7d-f908-4f3c-9335-d2ebe8e19690-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"48888a7d-f908-4f3c-9335-d2ebe8e19690\") " pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.996556 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/48888a7d-f908-4f3c-9335-d2ebe8e19690-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"48888a7d-f908-4f3c-9335-d2ebe8e19690\") " pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.996582 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48888a7d-f908-4f3c-9335-d2ebe8e19690-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"48888a7d-f908-4f3c-9335-d2ebe8e19690\") " pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.996598 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/48888a7d-f908-4f3c-9335-d2ebe8e19690-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"48888a7d-f908-4f3c-9335-d2ebe8e19690\") " pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.996650 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/48888a7d-f908-4f3c-9335-d2ebe8e19690-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"48888a7d-f908-4f3c-9335-d2ebe8e19690\") " pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.996666 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lrm2\" (UniqueName: \"kubernetes.io/projected/48888a7d-f908-4f3c-9335-d2ebe8e19690-kube-api-access-7lrm2\") pod \"ovsdbserver-nb-0\" (UID: \"48888a7d-f908-4f3c-9335-d2ebe8e19690\") " pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.997086 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"48888a7d-f908-4f3c-9335-d2ebe8e19690\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.997654 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/48888a7d-f908-4f3c-9335-d2ebe8e19690-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"48888a7d-f908-4f3c-9335-d2ebe8e19690\") " pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.998421 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48888a7d-f908-4f3c-9335-d2ebe8e19690-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"48888a7d-f908-4f3c-9335-d2ebe8e19690\") " pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:26 crc kubenswrapper[4820]: I0201 14:36:26.998427 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48888a7d-f908-4f3c-9335-d2ebe8e19690-config\") pod \"ovsdbserver-nb-0\" (UID: \"48888a7d-f908-4f3c-9335-d2ebe8e19690\") " pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:27 crc kubenswrapper[4820]: I0201 14:36:27.001779 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48888a7d-f908-4f3c-9335-d2ebe8e19690-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"48888a7d-f908-4f3c-9335-d2ebe8e19690\") " pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:27 crc kubenswrapper[4820]: I0201 14:36:27.005649 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/48888a7d-f908-4f3c-9335-d2ebe8e19690-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"48888a7d-f908-4f3c-9335-d2ebe8e19690\") " pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:27 crc kubenswrapper[4820]: I0201 14:36:27.008697 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/48888a7d-f908-4f3c-9335-d2ebe8e19690-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"48888a7d-f908-4f3c-9335-d2ebe8e19690\") " pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:27 crc kubenswrapper[4820]: I0201 14:36:27.017670 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"48888a7d-f908-4f3c-9335-d2ebe8e19690\") " pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:27 crc kubenswrapper[4820]: I0201 14:36:27.025745 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lrm2\" (UniqueName: \"kubernetes.io/projected/48888a7d-f908-4f3c-9335-d2ebe8e19690-kube-api-access-7lrm2\") pod \"ovsdbserver-nb-0\" (UID: \"48888a7d-f908-4f3c-9335-d2ebe8e19690\") " pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:27 crc kubenswrapper[4820]: I0201 14:36:27.093063 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:27 crc kubenswrapper[4820]: I0201 14:36:27.781209 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 01 14:36:27 crc kubenswrapper[4820]: I0201 14:36:27.782649 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:27 crc kubenswrapper[4820]: I0201 14:36:27.784770 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 01 14:36:27 crc kubenswrapper[4820]: I0201 14:36:27.784955 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 01 14:36:27 crc kubenswrapper[4820]: I0201 14:36:27.785330 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-9njfq" Feb 01 14:36:27 crc kubenswrapper[4820]: I0201 14:36:27.785423 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 01 14:36:27 crc kubenswrapper[4820]: I0201 14:36:27.794142 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 01 14:36:27 crc kubenswrapper[4820]: I0201 14:36:27.910216 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/584885db-41e8-4667-96bd-3f180ac41ae4-config\") pod \"ovsdbserver-sb-0\" (UID: \"584885db-41e8-4667-96bd-3f180ac41ae4\") " pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:27 crc kubenswrapper[4820]: I0201 14:36:27.910305 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/584885db-41e8-4667-96bd-3f180ac41ae4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"584885db-41e8-4667-96bd-3f180ac41ae4\") " pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:27 crc kubenswrapper[4820]: I0201 14:36:27.910347 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/584885db-41e8-4667-96bd-3f180ac41ae4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"584885db-41e8-4667-96bd-3f180ac41ae4\") " pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:27 crc kubenswrapper[4820]: I0201 14:36:27.910395 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/584885db-41e8-4667-96bd-3f180ac41ae4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"584885db-41e8-4667-96bd-3f180ac41ae4\") " pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:27 crc kubenswrapper[4820]: I0201 14:36:27.910439 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mms6n\" (UniqueName: \"kubernetes.io/projected/584885db-41e8-4667-96bd-3f180ac41ae4-kube-api-access-mms6n\") pod \"ovsdbserver-sb-0\" (UID: \"584885db-41e8-4667-96bd-3f180ac41ae4\") " pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:27 crc kubenswrapper[4820]: I0201 14:36:27.910473 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/584885db-41e8-4667-96bd-3f180ac41ae4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"584885db-41e8-4667-96bd-3f180ac41ae4\") " pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:27 crc kubenswrapper[4820]: I0201 14:36:27.910530 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/584885db-41e8-4667-96bd-3f180ac41ae4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: 
\"584885db-41e8-4667-96bd-3f180ac41ae4\") " pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:27 crc kubenswrapper[4820]: I0201 14:36:27.910572 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"584885db-41e8-4667-96bd-3f180ac41ae4\") " pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:28 crc kubenswrapper[4820]: I0201 14:36:28.012233 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mms6n\" (UniqueName: \"kubernetes.io/projected/584885db-41e8-4667-96bd-3f180ac41ae4-kube-api-access-mms6n\") pod \"ovsdbserver-sb-0\" (UID: \"584885db-41e8-4667-96bd-3f180ac41ae4\") " pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:28 crc kubenswrapper[4820]: I0201 14:36:28.012524 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/584885db-41e8-4667-96bd-3f180ac41ae4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"584885db-41e8-4667-96bd-3f180ac41ae4\") " pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:28 crc kubenswrapper[4820]: I0201 14:36:28.012552 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/584885db-41e8-4667-96bd-3f180ac41ae4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"584885db-41e8-4667-96bd-3f180ac41ae4\") " pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:28 crc kubenswrapper[4820]: I0201 14:36:28.012593 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"584885db-41e8-4667-96bd-3f180ac41ae4\") " pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:28 crc kubenswrapper[4820]: I0201 14:36:28.012631 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/584885db-41e8-4667-96bd-3f180ac41ae4-config\") pod \"ovsdbserver-sb-0\" (UID: \"584885db-41e8-4667-96bd-3f180ac41ae4\") " pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:28 crc kubenswrapper[4820]: I0201 14:36:28.012664 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/584885db-41e8-4667-96bd-3f180ac41ae4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"584885db-41e8-4667-96bd-3f180ac41ae4\") " pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:28 crc kubenswrapper[4820]: I0201 14:36:28.012685 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/584885db-41e8-4667-96bd-3f180ac41ae4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"584885db-41e8-4667-96bd-3f180ac41ae4\") " pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:28 crc kubenswrapper[4820]: I0201 14:36:28.012720 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/584885db-41e8-4667-96bd-3f180ac41ae4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"584885db-41e8-4667-96bd-3f180ac41ae4\") " pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:28 crc kubenswrapper[4820]: I0201 14:36:28.014029 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/584885db-41e8-4667-96bd-3f180ac41ae4-config\") pod \"ovsdbserver-sb-0\" (UID: \"584885db-41e8-4667-96bd-3f180ac41ae4\") " pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:28 crc kubenswrapper[4820]: I0201 14:36:28.014353 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/584885db-41e8-4667-96bd-3f180ac41ae4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"584885db-41e8-4667-96bd-3f180ac41ae4\") " pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:28 crc kubenswrapper[4820]: I0201 14:36:28.014505 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"584885db-41e8-4667-96bd-3f180ac41ae4\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:28 crc kubenswrapper[4820]: I0201 14:36:28.017090 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/584885db-41e8-4667-96bd-3f180ac41ae4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"584885db-41e8-4667-96bd-3f180ac41ae4\") " pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:28 crc kubenswrapper[4820]: I0201 14:36:28.017925 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/584885db-41e8-4667-96bd-3f180ac41ae4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"584885db-41e8-4667-96bd-3f180ac41ae4\") " pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:28 crc kubenswrapper[4820]: I0201 14:36:28.019416 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/584885db-41e8-4667-96bd-3f180ac41ae4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"584885db-41e8-4667-96bd-3f180ac41ae4\") " pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:28 crc kubenswrapper[4820]: I0201 14:36:28.025616 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/584885db-41e8-4667-96bd-3f180ac41ae4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"584885db-41e8-4667-96bd-3f180ac41ae4\") " pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:28 crc kubenswrapper[4820]: I0201 14:36:28.043402 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"584885db-41e8-4667-96bd-3f180ac41ae4\") " pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:28 crc kubenswrapper[4820]: I0201 14:36:28.047086 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mms6n\" (UniqueName: \"kubernetes.io/projected/584885db-41e8-4667-96bd-3f180ac41ae4-kube-api-access-mms6n\") pod \"ovsdbserver-sb-0\" (UID: \"584885db-41e8-4667-96bd-3f180ac41ae4\") " pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:28 crc kubenswrapper[4820]: I0201 14:36:28.105598 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:28 crc kubenswrapper[4820]: I0201 14:36:28.382962 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 01 14:36:28 crc kubenswrapper[4820]: W0201 14:36:28.872483 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9cdc675_a849_4f24_bca1_ea5c04c55b52.slice/crio-3623316050aeeb0566bce48e2e10ec8f59b074adf549d3b6cd4854bc4e266e8f WatchSource:0}: Error finding container 3623316050aeeb0566bce48e2e10ec8f59b074adf549d3b6cd4854bc4e266e8f: Status 404 returned error can't find the container with id 3623316050aeeb0566bce48e2e10ec8f59b074adf549d3b6cd4854bc4e266e8f Feb 01 14:36:28 crc kubenswrapper[4820]: E0201 14:36:28.875761 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 01 14:36:28 crc kubenswrapper[4820]: E0201 14:36:28.875908 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5gwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-g99q9_openstack(efca0d94-4689-487b-8cfb-cfc09fe18a07): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 01 14:36:28 crc kubenswrapper[4820]: E0201 14:36:28.877276 4820 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-g99q9" podUID="efca0d94-4689-487b-8cfb-cfc09fe18a07" Feb 01 14:36:28 crc kubenswrapper[4820]: I0201 14:36:28.879359 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 01 14:36:28 crc kubenswrapper[4820]: E0201 14:36:28.920193 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 01 14:36:28 crc kubenswrapper[4820]: E0201 14:36:28.920342 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rzbkm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-g92fk_openstack(d0b135d8-126c-4916-8e5d-2008cb2b4790): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 01 14:36:28 crc kubenswrapper[4820]: E0201 14:36:28.922077 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-g92fk" podUID="d0b135d8-126c-4916-8e5d-2008cb2b4790" Feb 01 14:36:29 crc kubenswrapper[4820]: I0201 14:36:29.304823 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 01 14:36:29 crc kubenswrapper[4820]: I0201 14:36:29.528170 4820 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c49d65b-e444-406e-8b45-e95ba6bbb52b","Type":"ContainerStarted","Data":"d6522310a7fbc00eacd33e5fe9e9971a955c53c810766d4dbdedafab4758d7f7"} Feb 01 14:36:29 crc kubenswrapper[4820]: I0201 14:36:29.529308 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 01 14:36:29 crc kubenswrapper[4820]: I0201 14:36:29.529545 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e9cdc675-a849-4f24-bca1-ea5c04c55b52","Type":"ContainerStarted","Data":"3623316050aeeb0566bce48e2e10ec8f59b074adf549d3b6cd4854bc4e266e8f"} Feb 01 14:36:29 crc kubenswrapper[4820]: I0201 14:36:29.531009 4820 generic.go:334] "Generic (PLEG): container finished" podID="205da31d-858f-4e4b-a141-77427f4fa529" containerID="6750546be786ce34e9494c8487d420611490c5b5d49b8d525444012682646b43" exitCode=0 Feb 01 14:36:29 crc kubenswrapper[4820]: I0201 14:36:29.531834 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-br9mx" event={"ID":"205da31d-858f-4e4b-a141-77427f4fa529","Type":"ContainerDied","Data":"6750546be786ce34e9494c8487d420611490c5b5d49b8d525444012682646b43"} Feb 01 14:36:29 crc kubenswrapper[4820]: I0201 14:36:29.542040 4820 generic.go:334] "Generic (PLEG): container finished" podID="8449fcd6-ef50-459d-9b6e-77c0065e2880" containerID="663f45eea3857f24e11446098fcdcad0e2edf271e93a280a03c1c49e72af94dc" exitCode=0 Feb 01 14:36:29 crc kubenswrapper[4820]: I0201 14:36:29.543003 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-hxdgb" event={"ID":"8449fcd6-ef50-459d-9b6e-77c0065e2880","Type":"ContainerDied","Data":"663f45eea3857f24e11446098fcdcad0e2edf271e93a280a03c1c49e72af94dc"} Feb 01 14:36:29 crc kubenswrapper[4820]: I0201 14:36:29.564623 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 01 14:36:29 crc kubenswrapper[4820]: W0201 14:36:29.567434 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3c7d7c1_20b6_4370_a0ae_da76c3b59c3b.slice/crio-14e0e05ee02fae49c409297cffc2a7d5bbb0566405d35e1d988bdfa43b879082 WatchSource:0}: Error finding container 14e0e05ee02fae49c409297cffc2a7d5bbb0566405d35e1d988bdfa43b879082: Status 404 returned error can't find the container with id 14e0e05ee02fae49c409297cffc2a7d5bbb0566405d35e1d988bdfa43b879082 Feb 01 14:36:29 crc kubenswrapper[4820]: I0201 14:36:29.728921 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 01 14:36:29 crc kubenswrapper[4820]: I0201 14:36:29.960570 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g99q9" Feb 01 14:36:29 crc kubenswrapper[4820]: I0201 14:36:29.965370 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-g92fk" Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.007705 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.018081 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.050753 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efca0d94-4689-487b-8cfb-cfc09fe18a07-config\") pod \"efca0d94-4689-487b-8cfb-cfc09fe18a07\" (UID: \"efca0d94-4689-487b-8cfb-cfc09fe18a07\") " Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.050984 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5gwd\" (UniqueName: \"kubernetes.io/projected/efca0d94-4689-487b-8cfb-cfc09fe18a07-kube-api-access-c5gwd\") pod \"efca0d94-4689-487b-8cfb-cfc09fe18a07\" (UID: \"efca0d94-4689-487b-8cfb-cfc09fe18a07\") " Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.051060 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efca0d94-4689-487b-8cfb-cfc09fe18a07-dns-svc\") pod \"efca0d94-4689-487b-8cfb-cfc09fe18a07\" (UID: \"efca0d94-4689-487b-8cfb-cfc09fe18a07\") " Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.051185 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzbkm\" (UniqueName: \"kubernetes.io/projected/d0b135d8-126c-4916-8e5d-2008cb2b4790-kube-api-access-rzbkm\") pod \"d0b135d8-126c-4916-8e5d-2008cb2b4790\" (UID: \"d0b135d8-126c-4916-8e5d-2008cb2b4790\") " Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.051227 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b135d8-126c-4916-8e5d-2008cb2b4790-config\") pod \"d0b135d8-126c-4916-8e5d-2008cb2b4790\" (UID: \"d0b135d8-126c-4916-8e5d-2008cb2b4790\") " Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.051535 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efca0d94-4689-487b-8cfb-cfc09fe18a07-config" (OuterVolumeSpecName: "config") pod "efca0d94-4689-487b-8cfb-cfc09fe18a07" (UID: "efca0d94-4689-487b-8cfb-cfc09fe18a07"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.051842 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efca0d94-4689-487b-8cfb-cfc09fe18a07-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.051917 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efca0d94-4689-487b-8cfb-cfc09fe18a07-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "efca0d94-4689-487b-8cfb-cfc09fe18a07" (UID: "efca0d94-4689-487b-8cfb-cfc09fe18a07"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.052305 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0b135d8-126c-4916-8e5d-2008cb2b4790-config" (OuterVolumeSpecName: "config") pod "d0b135d8-126c-4916-8e5d-2008cb2b4790" (UID: "d0b135d8-126c-4916-8e5d-2008cb2b4790"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.057201 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0b135d8-126c-4916-8e5d-2008cb2b4790-kube-api-access-rzbkm" (OuterVolumeSpecName: "kube-api-access-rzbkm") pod "d0b135d8-126c-4916-8e5d-2008cb2b4790" (UID: "d0b135d8-126c-4916-8e5d-2008cb2b4790"). InnerVolumeSpecName "kube-api-access-rzbkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.058703 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efca0d94-4689-487b-8cfb-cfc09fe18a07-kube-api-access-c5gwd" (OuterVolumeSpecName: "kube-api-access-c5gwd") pod "efca0d94-4689-487b-8cfb-cfc09fe18a07" (UID: "efca0d94-4689-487b-8cfb-cfc09fe18a07"). InnerVolumeSpecName "kube-api-access-c5gwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.072524 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wrqhs"] Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.110100 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rqms2"] Feb 01 14:36:30 crc kubenswrapper[4820]: W0201 14:36:30.114117 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75447ae7_687e_45f6_925f_7091cd5c8930.slice/crio-ad944b0fe4ee0358d603fca9d33fc07c052edabe3b1433ac56a2d10fe332d1a1 WatchSource:0}: Error finding container ad944b0fe4ee0358d603fca9d33fc07c052edabe3b1433ac56a2d10fe332d1a1: Status 404 returned error can't find the container with id ad944b0fe4ee0358d603fca9d33fc07c052edabe3b1433ac56a2d10fe332d1a1 Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.153465 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzbkm\" (UniqueName: \"kubernetes.io/projected/d0b135d8-126c-4916-8e5d-2008cb2b4790-kube-api-access-rzbkm\") on node \"crc\" DevicePath \"\"" Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.153495 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b135d8-126c-4916-8e5d-2008cb2b4790-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.153505 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5gwd\" (UniqueName: \"kubernetes.io/projected/efca0d94-4689-487b-8cfb-cfc09fe18a07-kube-api-access-c5gwd\") on node \"crc\" DevicePath \"\"" Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.153513 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efca0d94-4689-487b-8cfb-cfc09fe18a07-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.203852 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 01 14:36:30 crc kubenswrapper[4820]: W0201 14:36:30.214859 4820 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48888a7d_f908_4f3c_9335_d2ebe8e19690.slice/crio-c792f4ba9cf2d9647958542dc8d26dd48db3691eba567941f36b42ba1b2478d0 WatchSource:0}: Error finding container c792f4ba9cf2d9647958542dc8d26dd48db3691eba567941f36b42ba1b2478d0: Status 404 returned error can't find the container with id c792f4ba9cf2d9647958542dc8d26dd48db3691eba567941f36b42ba1b2478d0 Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.551136 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g99q9" Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.551180 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-g99q9" event={"ID":"efca0d94-4689-487b-8cfb-cfc09fe18a07","Type":"ContainerDied","Data":"ea8428e67619baad1f0f067b212c0777126b92e392c2125c1aaa4e110e5cac04"} Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.553580 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"aa35f7d5-c06d-4d2e-8806-c6e638c9db02","Type":"ContainerStarted","Data":"290b44e41452f0c02e7271e466d7cbdd634077f19245cfa008c4d10fa31fe73f"} Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.554957 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b","Type":"ContainerStarted","Data":"14e0e05ee02fae49c409297cffc2a7d5bbb0566405d35e1d988bdfa43b879082"} Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.562392 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rqms2" event={"ID":"75447ae7-687e-45f6-925f-7091cd5c8930","Type":"ContainerStarted","Data":"ad944b0fe4ee0358d603fca9d33fc07c052edabe3b1433ac56a2d10fe332d1a1"} Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.572299 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"48888a7d-f908-4f3c-9335-d2ebe8e19690","Type":"ContainerStarted","Data":"c792f4ba9cf2d9647958542dc8d26dd48db3691eba567941f36b42ba1b2478d0"} Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.574355 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-hxdgb" event={"ID":"8449fcd6-ef50-459d-9b6e-77c0065e2880","Type":"ContainerStarted","Data":"f012224863bbd8454dd125e7c76568a19a7ac7208e2e89e9bd0c5a0f6aeb7006"} Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.575320 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-hxdgb" Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.578411 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"530f8225-6e15-4177-9b64-24e4f767f6c5","Type":"ContainerStarted","Data":"c4e88bf0eaa088aea5c05fe28c709996c1629f2bbdbc9f703deb2f3bcd30d4a2"} Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.582290 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-br9mx" event={"ID":"205da31d-858f-4e4b-a141-77427f4fa529","Type":"ContainerStarted","Data":"137c40c2965ba549311283d0b6924302f65d2e2e790f9b9c9d7db955bbb7c15e"} Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.583076 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-br9mx" Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.584302 4820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-g92fk" event={"ID":"d0b135d8-126c-4916-8e5d-2008cb2b4790","Type":"ContainerDied","Data":"074e36448f378e07169606c3073010d032053b25e2011ead700454cc27d49cab"} Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.584350 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-g92fk" Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.586252 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wrqhs" event={"ID":"0badf713-2cde-439b-8a0e-c2eedac05b99","Type":"ContainerStarted","Data":"fdb236064c7a10b92335322cef4de8eecf3eec895720656fc39101875fa1988a"} Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.587342 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"834dff46-eb6f-4646-830a-a665bcd1461b","Type":"ContainerStarted","Data":"ecb263a3f5b10c23831e4ee7fd54def8a4d99d2a95fc6de0d0c90daa08713bec"} Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.587996 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"584885db-41e8-4667-96bd-3f180ac41ae4","Type":"ContainerStarted","Data":"1e3614e913f0e14cd93426151d6e364e6f3e4c71f06e7e3219b1e99a9983d5f9"} Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.603076 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-hxdgb" podStartSLOduration=5.967412589 podStartE2EDuration="15.603033606s" podCreationTimestamp="2026-02-01 14:36:15 +0000 UTC" firstStartedPulling="2026-02-01 14:36:19.352800609 +0000 UTC m=+920.873166893" lastFinishedPulling="2026-02-01 14:36:28.988421616 +0000 UTC m=+930.508787910" observedRunningTime="2026-02-01 14:36:30.593624217 +0000 UTC m=+932.113990501" watchObservedRunningTime="2026-02-01 14:36:30.603033606 +0000 UTC m=+932.123399890" Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.618337 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-br9mx" podStartSLOduration=6.969559713 podStartE2EDuration="16.618320098s" podCreationTimestamp="2026-02-01 14:36:14 +0000 UTC" firstStartedPulling="2026-02-01 14:36:19.347352287 +0000 UTC m=+920.867718571" lastFinishedPulling="2026-02-01 14:36:28.996112672 +0000 UTC m=+930.516478956" observedRunningTime="2026-02-01 14:36:30.609077883 +0000 UTC m=+932.129444167" watchObservedRunningTime="2026-02-01 14:36:30.618320098 +0000 UTC m=+932.138686382" Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.666230 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g99q9"] Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.671819 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g99q9"] Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.688830 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-g92fk"] Feb 01 14:36:30 crc kubenswrapper[4820]: I0201 14:36:30.688896 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-g92fk"] Feb 01 14:36:31 crc kubenswrapper[4820]: I0201 14:36:31.213646 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0b135d8-126c-4916-8e5d-2008cb2b4790" path="/var/lib/kubelet/pods/d0b135d8-126c-4916-8e5d-2008cb2b4790/volumes" Feb 01 14:36:31 crc kubenswrapper[4820]: I0201 
14:36:31.214048 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efca0d94-4689-487b-8cfb-cfc09fe18a07" path="/var/lib/kubelet/pods/efca0d94-4689-487b-8cfb-cfc09fe18a07/volumes" Feb 01 14:36:35 crc kubenswrapper[4820]: I0201 14:36:35.632855 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-br9mx" Feb 01 14:36:35 crc kubenswrapper[4820]: I0201 14:36:35.680643 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-hxdgb" Feb 01 14:36:35 crc kubenswrapper[4820]: I0201 14:36:35.730695 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-br9mx"] Feb 01 14:36:36 crc kubenswrapper[4820]: I0201 14:36:36.629689 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-br9mx" podUID="205da31d-858f-4e4b-a141-77427f4fa529" containerName="dnsmasq-dns" containerID="cri-o://137c40c2965ba549311283d0b6924302f65d2e2e790f9b9c9d7db955bbb7c15e" gracePeriod=10 Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.488718 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-br9mx" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.531970 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-l4qhs"] Feb 01 14:36:37 crc kubenswrapper[4820]: E0201 14:36:37.532336 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205da31d-858f-4e4b-a141-77427f4fa529" containerName="init" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.532360 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="205da31d-858f-4e4b-a141-77427f4fa529" containerName="init" Feb 01 14:36:37 crc kubenswrapper[4820]: E0201 14:36:37.532389 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205da31d-858f-4e4b-a141-77427f4fa529" containerName="dnsmasq-dns" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.532397 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="205da31d-858f-4e4b-a141-77427f4fa529" containerName="dnsmasq-dns" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.532566 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="205da31d-858f-4e4b-a141-77427f4fa529" containerName="dnsmasq-dns" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.533164 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-l4qhs" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.538983 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.548976 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-l4qhs"] Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.578663 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6p2z\" (UniqueName: \"kubernetes.io/projected/205da31d-858f-4e4b-a141-77427f4fa529-kube-api-access-w6p2z\") pod \"205da31d-858f-4e4b-a141-77427f4fa529\" (UID: \"205da31d-858f-4e4b-a141-77427f4fa529\") " Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.578725 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/205da31d-858f-4e4b-a141-77427f4fa529-dns-svc\") pod \"205da31d-858f-4e4b-a141-77427f4fa529\" (UID: \"205da31d-858f-4e4b-a141-77427f4fa529\") " Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.578754 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/205da31d-858f-4e4b-a141-77427f4fa529-config\") pod \"205da31d-858f-4e4b-a141-77427f4fa529\" (UID: \"205da31d-858f-4e4b-a141-77427f4fa529\") " Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.579071 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e613c7d2-9b6b-4e3c-93d7-617b826931c7-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-l4qhs\" (UID: \"e613c7d2-9b6b-4e3c-93d7-617b826931c7\") " pod="openstack/ovn-controller-metrics-l4qhs" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.579104 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bbw8\" (UniqueName: \"kubernetes.io/projected/e613c7d2-9b6b-4e3c-93d7-617b826931c7-kube-api-access-5bbw8\") pod \"ovn-controller-metrics-l4qhs\" (UID: \"e613c7d2-9b6b-4e3c-93d7-617b826931c7\") " pod="openstack/ovn-controller-metrics-l4qhs" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.579132 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e613c7d2-9b6b-4e3c-93d7-617b826931c7-config\") pod \"ovn-controller-metrics-l4qhs\" (UID: \"e613c7d2-9b6b-4e3c-93d7-617b826931c7\") " pod="openstack/ovn-controller-metrics-l4qhs" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.579175 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e613c7d2-9b6b-4e3c-93d7-617b826931c7-ovs-rundir\") pod \"ovn-controller-metrics-l4qhs\" (UID: \"e613c7d2-9b6b-4e3c-93d7-617b826931c7\") " pod="openstack/ovn-controller-metrics-l4qhs" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.579222 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e613c7d2-9b6b-4e3c-93d7-617b826931c7-combined-ca-bundle\") pod \"ovn-controller-metrics-l4qhs\" (UID: \"e613c7d2-9b6b-4e3c-93d7-617b826931c7\") " pod="openstack/ovn-controller-metrics-l4qhs" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 
14:36:37.579248 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e613c7d2-9b6b-4e3c-93d7-617b826931c7-ovn-rundir\") pod \"ovn-controller-metrics-l4qhs\" (UID: \"e613c7d2-9b6b-4e3c-93d7-617b826931c7\") " pod="openstack/ovn-controller-metrics-l4qhs" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.641229 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/205da31d-858f-4e4b-a141-77427f4fa529-kube-api-access-w6p2z" (OuterVolumeSpecName: "kube-api-access-w6p2z") pod "205da31d-858f-4e4b-a141-77427f4fa529" (UID: "205da31d-858f-4e4b-a141-77427f4fa529"). InnerVolumeSpecName "kube-api-access-w6p2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.645209 4820 generic.go:334] "Generic (PLEG): container finished" podID="205da31d-858f-4e4b-a141-77427f4fa529" containerID="137c40c2965ba549311283d0b6924302f65d2e2e790f9b9c9d7db955bbb7c15e" exitCode=0 Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.645270 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-br9mx" event={"ID":"205da31d-858f-4e4b-a141-77427f4fa529","Type":"ContainerDied","Data":"137c40c2965ba549311283d0b6924302f65d2e2e790f9b9c9d7db955bbb7c15e"} Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.645340 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-br9mx" event={"ID":"205da31d-858f-4e4b-a141-77427f4fa529","Type":"ContainerDied","Data":"f92a5063c454bb66e7f135cc8a7e7071f7112c1ca33cfbcfa8f42791260e4b03"} Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.645359 4820 scope.go:117] "RemoveContainer" containerID="137c40c2965ba549311283d0b6924302f65d2e2e790f9b9c9d7db955bbb7c15e" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.645509 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-br9mx" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.678795 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/205da31d-858f-4e4b-a141-77427f4fa529-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "205da31d-858f-4e4b-a141-77427f4fa529" (UID: "205da31d-858f-4e4b-a141-77427f4fa529"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.695375 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-ssq2s"] Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.696617 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-ssq2s" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.699513 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.699547 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e613c7d2-9b6b-4e3c-93d7-617b826931c7-ovn-rundir\") pod \"ovn-controller-metrics-l4qhs\" (UID: \"e613c7d2-9b6b-4e3c-93d7-617b826931c7\") " pod="openstack/ovn-controller-metrics-l4qhs" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.699756 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e613c7d2-9b6b-4e3c-93d7-617b826931c7-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-l4qhs\" (UID: \"e613c7d2-9b6b-4e3c-93d7-617b826931c7\") " pod="openstack/ovn-controller-metrics-l4qhs" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.699826 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bbw8\" (UniqueName: \"kubernetes.io/projected/e613c7d2-9b6b-4e3c-93d7-617b826931c7-kube-api-access-5bbw8\") pod \"ovn-controller-metrics-l4qhs\" (UID: \"e613c7d2-9b6b-4e3c-93d7-617b826931c7\") " pod="openstack/ovn-controller-metrics-l4qhs" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.699899 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e613c7d2-9b6b-4e3c-93d7-617b826931c7-config\") pod \"ovn-controller-metrics-l4qhs\" (UID: \"e613c7d2-9b6b-4e3c-93d7-617b826931c7\") " pod="openstack/ovn-controller-metrics-l4qhs" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.699944 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e613c7d2-9b6b-4e3c-93d7-617b826931c7-ovn-rundir\") pod \"ovn-controller-metrics-l4qhs\" (UID: \"e613c7d2-9b6b-4e3c-93d7-617b826931c7\") " pod="openstack/ovn-controller-metrics-l4qhs" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.700000 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e613c7d2-9b6b-4e3c-93d7-617b826931c7-ovs-rundir\") pod \"ovn-controller-metrics-l4qhs\" (UID: \"e613c7d2-9b6b-4e3c-93d7-617b826931c7\") " pod="openstack/ovn-controller-metrics-l4qhs" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.700046 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e613c7d2-9b6b-4e3c-93d7-617b826931c7-combined-ca-bundle\") pod \"ovn-controller-metrics-l4qhs\" (UID: \"e613c7d2-9b6b-4e3c-93d7-617b826931c7\") " pod="openstack/ovn-controller-metrics-l4qhs" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.700165 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6p2z\" (UniqueName: \"kubernetes.io/projected/205da31d-858f-4e4b-a141-77427f4fa529-kube-api-access-w6p2z\") on node \"crc\" DevicePath \"\"" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.700190 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/205da31d-858f-4e4b-a141-77427f4fa529-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.700543 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e613c7d2-9b6b-4e3c-93d7-617b826931c7-ovs-rundir\") pod \"ovn-controller-metrics-l4qhs\" (UID: \"e613c7d2-9b6b-4e3c-93d7-617b826931c7\") " pod="openstack/ovn-controller-metrics-l4qhs" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.704595 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e613c7d2-9b6b-4e3c-93d7-617b826931c7-config\") pod \"ovn-controller-metrics-l4qhs\" (UID: \"e613c7d2-9b6b-4e3c-93d7-617b826931c7\") " pod="openstack/ovn-controller-metrics-l4qhs" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.706272 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e613c7d2-9b6b-4e3c-93d7-617b826931c7-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-l4qhs\" (UID: \"e613c7d2-9b6b-4e3c-93d7-617b826931c7\") " pod="openstack/ovn-controller-metrics-l4qhs" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.710238 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-ssq2s"] Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.712771 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/205da31d-858f-4e4b-a141-77427f4fa529-config" (OuterVolumeSpecName: "config") pod "205da31d-858f-4e4b-a141-77427f4fa529" (UID: "205da31d-858f-4e4b-a141-77427f4fa529"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.714026 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e613c7d2-9b6b-4e3c-93d7-617b826931c7-combined-ca-bundle\") pod \"ovn-controller-metrics-l4qhs\" (UID: \"e613c7d2-9b6b-4e3c-93d7-617b826931c7\") " pod="openstack/ovn-controller-metrics-l4qhs" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.719171 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bbw8\" (UniqueName: \"kubernetes.io/projected/e613c7d2-9b6b-4e3c-93d7-617b826931c7-kube-api-access-5bbw8\") pod \"ovn-controller-metrics-l4qhs\" (UID: \"e613c7d2-9b6b-4e3c-93d7-617b826931c7\") " pod="openstack/ovn-controller-metrics-l4qhs" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.801794 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee497037-4f55-4f46-87c8-959a7a70a944-config\") pod \"dnsmasq-dns-5bf47b49b7-ssq2s\" (UID: \"ee497037-4f55-4f46-87c8-959a7a70a944\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ssq2s" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.802202 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee497037-4f55-4f46-87c8-959a7a70a944-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-ssq2s\" (UID: \"ee497037-4f55-4f46-87c8-959a7a70a944\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ssq2s" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.802231 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jdr4\" (UniqueName: \"kubernetes.io/projected/ee497037-4f55-4f46-87c8-959a7a70a944-kube-api-access-8jdr4\") pod \"dnsmasq-dns-5bf47b49b7-ssq2s\" (UID: 
\"ee497037-4f55-4f46-87c8-959a7a70a944\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ssq2s" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.802301 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee497037-4f55-4f46-87c8-959a7a70a944-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-ssq2s\" (UID: \"ee497037-4f55-4f46-87c8-959a7a70a944\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ssq2s" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.802412 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/205da31d-858f-4e4b-a141-77427f4fa529-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.802568 4820 scope.go:117] "RemoveContainer" containerID="6750546be786ce34e9494c8487d420611490c5b5d49b8d525444012682646b43" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.866299 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-l4qhs" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.903910 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee497037-4f55-4f46-87c8-959a7a70a944-config\") pod \"dnsmasq-dns-5bf47b49b7-ssq2s\" (UID: \"ee497037-4f55-4f46-87c8-959a7a70a944\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ssq2s" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.903970 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee497037-4f55-4f46-87c8-959a7a70a944-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-ssq2s\" (UID: \"ee497037-4f55-4f46-87c8-959a7a70a944\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ssq2s" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.903992 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jdr4\" (UniqueName: \"kubernetes.io/projected/ee497037-4f55-4f46-87c8-959a7a70a944-kube-api-access-8jdr4\") pod \"dnsmasq-dns-5bf47b49b7-ssq2s\" (UID: \"ee497037-4f55-4f46-87c8-959a7a70a944\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ssq2s" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.904046 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee497037-4f55-4f46-87c8-959a7a70a944-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-ssq2s\" (UID: \"ee497037-4f55-4f46-87c8-959a7a70a944\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ssq2s" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.904817 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee497037-4f55-4f46-87c8-959a7a70a944-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-ssq2s\" (UID: \"ee497037-4f55-4f46-87c8-959a7a70a944\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ssq2s" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.905352 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee497037-4f55-4f46-87c8-959a7a70a944-config\") pod \"dnsmasq-dns-5bf47b49b7-ssq2s\" (UID: \"ee497037-4f55-4f46-87c8-959a7a70a944\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ssq2s" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.905899 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/ee497037-4f55-4f46-87c8-959a7a70a944-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-ssq2s\" (UID: \"ee497037-4f55-4f46-87c8-959a7a70a944\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ssq2s" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.926122 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jdr4\" (UniqueName: \"kubernetes.io/projected/ee497037-4f55-4f46-87c8-959a7a70a944-kube-api-access-8jdr4\") pod \"dnsmasq-dns-5bf47b49b7-ssq2s\" (UID: \"ee497037-4f55-4f46-87c8-959a7a70a944\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ssq2s" Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.982862 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-br9mx"] Feb 01 14:36:37 crc kubenswrapper[4820]: I0201 14:36:37.995849 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-br9mx"] Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.005150 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-ssq2s"] Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.010936 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-ssq2s" Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.065917 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-wxbt2"] Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.068041 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-wxbt2" Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.070746 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.081326 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-wxbt2"] Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.127073 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66252276-3151-4fc8-accd-e6f036d64ba5-config\") pod \"dnsmasq-dns-8554648995-wxbt2\" (UID: \"66252276-3151-4fc8-accd-e6f036d64ba5\") " pod="openstack/dnsmasq-dns-8554648995-wxbt2" Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.127125 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66252276-3151-4fc8-accd-e6f036d64ba5-dns-svc\") pod \"dnsmasq-dns-8554648995-wxbt2\" (UID: \"66252276-3151-4fc8-accd-e6f036d64ba5\") " pod="openstack/dnsmasq-dns-8554648995-wxbt2" Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.127163 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z26v2\" (UniqueName: \"kubernetes.io/projected/66252276-3151-4fc8-accd-e6f036d64ba5-kube-api-access-z26v2\") pod \"dnsmasq-dns-8554648995-wxbt2\" (UID: \"66252276-3151-4fc8-accd-e6f036d64ba5\") " pod="openstack/dnsmasq-dns-8554648995-wxbt2" Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.127201 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66252276-3151-4fc8-accd-e6f036d64ba5-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-wxbt2\" (UID: \"66252276-3151-4fc8-accd-e6f036d64ba5\") " 
pod="openstack/dnsmasq-dns-8554648995-wxbt2" Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.127234 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66252276-3151-4fc8-accd-e6f036d64ba5-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-wxbt2\" (UID: \"66252276-3151-4fc8-accd-e6f036d64ba5\") " pod="openstack/dnsmasq-dns-8554648995-wxbt2" Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.228786 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66252276-3151-4fc8-accd-e6f036d64ba5-config\") pod \"dnsmasq-dns-8554648995-wxbt2\" (UID: \"66252276-3151-4fc8-accd-e6f036d64ba5\") " pod="openstack/dnsmasq-dns-8554648995-wxbt2" Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.228841 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66252276-3151-4fc8-accd-e6f036d64ba5-dns-svc\") pod \"dnsmasq-dns-8554648995-wxbt2\" (UID: \"66252276-3151-4fc8-accd-e6f036d64ba5\") " pod="openstack/dnsmasq-dns-8554648995-wxbt2" Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.228869 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z26v2\" (UniqueName: \"kubernetes.io/projected/66252276-3151-4fc8-accd-e6f036d64ba5-kube-api-access-z26v2\") pod \"dnsmasq-dns-8554648995-wxbt2\" (UID: \"66252276-3151-4fc8-accd-e6f036d64ba5\") " pod="openstack/dnsmasq-dns-8554648995-wxbt2" Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.228917 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66252276-3151-4fc8-accd-e6f036d64ba5-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-wxbt2\" (UID: \"66252276-3151-4fc8-accd-e6f036d64ba5\") " pod="openstack/dnsmasq-dns-8554648995-wxbt2" Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.228944 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66252276-3151-4fc8-accd-e6f036d64ba5-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-wxbt2\" (UID: \"66252276-3151-4fc8-accd-e6f036d64ba5\") " pod="openstack/dnsmasq-dns-8554648995-wxbt2" Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.229755 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66252276-3151-4fc8-accd-e6f036d64ba5-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-wxbt2\" (UID: \"66252276-3151-4fc8-accd-e6f036d64ba5\") " pod="openstack/dnsmasq-dns-8554648995-wxbt2" Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.230327 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66252276-3151-4fc8-accd-e6f036d64ba5-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-wxbt2\" (UID: \"66252276-3151-4fc8-accd-e6f036d64ba5\") " pod="openstack/dnsmasq-dns-8554648995-wxbt2" Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.230612 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66252276-3151-4fc8-accd-e6f036d64ba5-dns-svc\") pod \"dnsmasq-dns-8554648995-wxbt2\" (UID: \"66252276-3151-4fc8-accd-e6f036d64ba5\") " pod="openstack/dnsmasq-dns-8554648995-wxbt2" Feb 01 14:36:38 crc kubenswrapper[4820]: 
I0201 14:36:38.232904 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66252276-3151-4fc8-accd-e6f036d64ba5-config\") pod \"dnsmasq-dns-8554648995-wxbt2\" (UID: \"66252276-3151-4fc8-accd-e6f036d64ba5\") " pod="openstack/dnsmasq-dns-8554648995-wxbt2" Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.350990 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z26v2\" (UniqueName: \"kubernetes.io/projected/66252276-3151-4fc8-accd-e6f036d64ba5-kube-api-access-z26v2\") pod \"dnsmasq-dns-8554648995-wxbt2\" (UID: \"66252276-3151-4fc8-accd-e6f036d64ba5\") " pod="openstack/dnsmasq-dns-8554648995-wxbt2" Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.466109 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-wxbt2" Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.570108 4820 scope.go:117] "RemoveContainer" containerID="137c40c2965ba549311283d0b6924302f65d2e2e790f9b9c9d7db955bbb7c15e" Feb 01 14:36:38 crc kubenswrapper[4820]: E0201 14:36:38.570479 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"137c40c2965ba549311283d0b6924302f65d2e2e790f9b9c9d7db955bbb7c15e\": container with ID starting with 137c40c2965ba549311283d0b6924302f65d2e2e790f9b9c9d7db955bbb7c15e not found: ID does not exist" containerID="137c40c2965ba549311283d0b6924302f65d2e2e790f9b9c9d7db955bbb7c15e" Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.570519 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"137c40c2965ba549311283d0b6924302f65d2e2e790f9b9c9d7db955bbb7c15e"} err="failed to get container status \"137c40c2965ba549311283d0b6924302f65d2e2e790f9b9c9d7db955bbb7c15e\": rpc error: code = NotFound desc = could not find container \"137c40c2965ba549311283d0b6924302f65d2e2e790f9b9c9d7db955bbb7c15e\": container with ID starting with 137c40c2965ba549311283d0b6924302f65d2e2e790f9b9c9d7db955bbb7c15e not found: ID does not exist" Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.570546 4820 scope.go:117] "RemoveContainer" containerID="6750546be786ce34e9494c8487d420611490c5b5d49b8d525444012682646b43" Feb 01 14:36:38 crc kubenswrapper[4820]: E0201 14:36:38.570914 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6750546be786ce34e9494c8487d420611490c5b5d49b8d525444012682646b43\": container with ID starting with 6750546be786ce34e9494c8487d420611490c5b5d49b8d525444012682646b43 not found: ID does not exist" containerID="6750546be786ce34e9494c8487d420611490c5b5d49b8d525444012682646b43" Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.570941 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6750546be786ce34e9494c8487d420611490c5b5d49b8d525444012682646b43"} err="failed to get container status \"6750546be786ce34e9494c8487d420611490c5b5d49b8d525444012682646b43\": rpc error: code = NotFound desc = could not find container \"6750546be786ce34e9494c8487d420611490c5b5d49b8d525444012682646b43\": container with ID starting with 6750546be786ce34e9494c8487d420611490c5b5d49b8d525444012682646b43 not found: ID does not exist" Feb 01 14:36:38 crc kubenswrapper[4820]: I0201 14:36:38.652203 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"e9cdc675-a849-4f24-bca1-ea5c04c55b52","Type":"ContainerStarted","Data":"8ca585fe94781bc03e18c5f6a239e1473552286942afa45312fecc7896c0516d"} Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:38.999853 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-l4qhs"] Feb 01 14:36:39 crc kubenswrapper[4820]: W0201 14:36:39.029580 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode613c7d2_9b6b_4e3c_93d7_617b826931c7.slice/crio-a5596bed9e7c7a05b89d39f45a0889065350f84f1a7a5a34600b42a4b26ada21 WatchSource:0}: Error finding container a5596bed9e7c7a05b89d39f45a0889065350f84f1a7a5a34600b42a4b26ada21: Status 404 returned error can't find the container with id a5596bed9e7c7a05b89d39f45a0889065350f84f1a7a5a34600b42a4b26ada21 Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:39.109339 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-wxbt2"] Feb 01 14:36:39 crc kubenswrapper[4820]: W0201 14:36:39.115960 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66252276_3151_4fc8_accd_e6f036d64ba5.slice/crio-55b2fe5c79a8716fd7c67a2a57531c6c4a6bb7fa6cc30e22e046b34d4ea23e54 WatchSource:0}: Error finding container 55b2fe5c79a8716fd7c67a2a57531c6c4a6bb7fa6cc30e22e046b34d4ea23e54: Status 404 returned error can't find the container with id 55b2fe5c79a8716fd7c67a2a57531c6c4a6bb7fa6cc30e22e046b34d4ea23e54 Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:39.211902 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="205da31d-858f-4e4b-a141-77427f4fa529" path="/var/lib/kubelet/pods/205da31d-858f-4e4b-a141-77427f4fa529/volumes" Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:39.213301 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-ssq2s"] Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:39.666498 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"584885db-41e8-4667-96bd-3f180ac41ae4","Type":"ContainerStarted","Data":"2d33d0a7eecac591cabc5b3a260335a1c13d64f9af717645144ebcb83140ee42"} Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:39.670592 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"aa35f7d5-c06d-4d2e-8806-c6e638c9db02","Type":"ContainerStarted","Data":"5741fb879744460e5b622229df182e8ce99b93bf4490a3242bdd29d09e036d10"} Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:39.671515 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:39.674961 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b","Type":"ContainerStarted","Data":"83cdd6b4cd9e8d755c60d2a0a053ebd5a37fe1593c1c3f30a04ecdfbf0c028b8"} Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:39.679175 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"48888a7d-f908-4f3c-9335-d2ebe8e19690","Type":"ContainerStarted","Data":"7dcb1fa1e3195bc97b416820901c51189fb305b76af577918e797977bccaa9ee"} Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:39.683860 4820 generic.go:334] "Generic (PLEG): container finished" podID="66252276-3151-4fc8-accd-e6f036d64ba5" 
containerID="5537e1d6936e85ac8b352ab74ee7c6e08900ddfb764c8153fec2374b451be7e3" exitCode=0 Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:39.683970 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-wxbt2" event={"ID":"66252276-3151-4fc8-accd-e6f036d64ba5","Type":"ContainerDied","Data":"5537e1d6936e85ac8b352ab74ee7c6e08900ddfb764c8153fec2374b451be7e3"} Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:39.683997 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-wxbt2" event={"ID":"66252276-3151-4fc8-accd-e6f036d64ba5","Type":"ContainerStarted","Data":"55b2fe5c79a8716fd7c67a2a57531c6c4a6bb7fa6cc30e22e046b34d4ea23e54"} Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:39.688111 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=13.137045207 podStartE2EDuration="20.688091975s" podCreationTimestamp="2026-02-01 14:36:19 +0000 UTC" firstStartedPulling="2026-02-01 14:36:29.533686394 +0000 UTC m=+931.054052678" lastFinishedPulling="2026-02-01 14:36:37.084733162 +0000 UTC m=+938.605099446" observedRunningTime="2026-02-01 14:36:39.686795514 +0000 UTC m=+941.207161798" watchObservedRunningTime="2026-02-01 14:36:39.688091975 +0000 UTC m=+941.208458259" Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:39.703672 4820 generic.go:334] "Generic (PLEG): container finished" podID="ee497037-4f55-4f46-87c8-959a7a70a944" containerID="03ca94aee2a85410d2f97e551f5253e0e9b30516b6a3b569a5a009a7411caaf5" exitCode=0 Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:39.703785 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-ssq2s" event={"ID":"ee497037-4f55-4f46-87c8-959a7a70a944","Type":"ContainerDied","Data":"03ca94aee2a85410d2f97e551f5253e0e9b30516b6a3b569a5a009a7411caaf5"} Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:39.703812 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-ssq2s" event={"ID":"ee497037-4f55-4f46-87c8-959a7a70a944","Type":"ContainerStarted","Data":"8b00f1a25c0736993e6f34eed22e6d9f765515f9871fec10ad64fa438bc39070"} Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:39.710649 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"834dff46-eb6f-4646-830a-a665bcd1461b","Type":"ContainerStarted","Data":"ba5dc266666ca090cdd8d6040e3399624af7fc09e03a870c966dd4c4e316dd5e"} Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:39.711438 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:39.715042 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-l4qhs" event={"ID":"e613c7d2-9b6b-4e3c-93d7-617b826931c7","Type":"ContainerStarted","Data":"a5596bed9e7c7a05b89d39f45a0889065350f84f1a7a5a34600b42a4b26ada21"} Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:39.719339 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rqms2" event={"ID":"75447ae7-687e-45f6-925f-7091cd5c8930","Type":"ContainerStarted","Data":"1f6271db3e143ef076f2408370bc8d559ff71963782d971705252c78d3002ccc"} Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:39.726354 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wrqhs" 
event={"ID":"0badf713-2cde-439b-8a0e-c2eedac05b99","Type":"ContainerStarted","Data":"9e7f3f64c84958c43c8c07a021fc84f6894969f53471074ab6a819999e678a56"} Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:39.726577 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-wrqhs" Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:39.751422 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"530f8225-6e15-4177-9b64-24e4f767f6c5","Type":"ContainerStarted","Data":"ed67ff52a71b7be2570e379585fc5d071812cf77c47a88a016723dc4eabeec7f"} Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:39.779216 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-wrqhs" podStartSLOduration=7.571339377 podStartE2EDuration="14.779197891s" podCreationTimestamp="2026-02-01 14:36:25 +0000 UTC" firstStartedPulling="2026-02-01 14:36:30.101760417 +0000 UTC m=+931.622126701" lastFinishedPulling="2026-02-01 14:36:37.309618931 +0000 UTC m=+938.829985215" observedRunningTime="2026-02-01 14:36:39.771502423 +0000 UTC m=+941.291868697" watchObservedRunningTime="2026-02-01 14:36:39.779197891 +0000 UTC m=+941.299564175" Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:39.861584 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.18143156 podStartE2EDuration="18.861563833s" podCreationTimestamp="2026-02-01 14:36:21 +0000 UTC" firstStartedPulling="2026-02-01 14:36:30.025573555 +0000 UTC m=+931.545939839" lastFinishedPulling="2026-02-01 14:36:38.705705828 +0000 UTC m=+940.226072112" observedRunningTime="2026-02-01 14:36:39.840079791 +0000 UTC m=+941.360446075" watchObservedRunningTime="2026-02-01 14:36:39.861563833 +0000 UTC m=+941.381930117" Feb 01 14:36:39 crc kubenswrapper[4820]: I0201 14:36:39.987634 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-ssq2s" Feb 01 14:36:40 crc kubenswrapper[4820]: I0201 14:36:40.072218 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jdr4\" (UniqueName: \"kubernetes.io/projected/ee497037-4f55-4f46-87c8-959a7a70a944-kube-api-access-8jdr4\") pod \"ee497037-4f55-4f46-87c8-959a7a70a944\" (UID: \"ee497037-4f55-4f46-87c8-959a7a70a944\") " Feb 01 14:36:40 crc kubenswrapper[4820]: I0201 14:36:40.072281 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee497037-4f55-4f46-87c8-959a7a70a944-config\") pod \"ee497037-4f55-4f46-87c8-959a7a70a944\" (UID: \"ee497037-4f55-4f46-87c8-959a7a70a944\") " Feb 01 14:36:40 crc kubenswrapper[4820]: I0201 14:36:40.072312 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee497037-4f55-4f46-87c8-959a7a70a944-ovsdbserver-nb\") pod \"ee497037-4f55-4f46-87c8-959a7a70a944\" (UID: \"ee497037-4f55-4f46-87c8-959a7a70a944\") " Feb 01 14:36:40 crc kubenswrapper[4820]: I0201 14:36:40.072344 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee497037-4f55-4f46-87c8-959a7a70a944-dns-svc\") pod \"ee497037-4f55-4f46-87c8-959a7a70a944\" (UID: \"ee497037-4f55-4f46-87c8-959a7a70a944\") " Feb 01 14:36:40 crc kubenswrapper[4820]: I0201 14:36:40.091158 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee497037-4f55-4f46-87c8-959a7a70a944-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ee497037-4f55-4f46-87c8-959a7a70a944" (UID: "ee497037-4f55-4f46-87c8-959a7a70a944"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:36:40 crc kubenswrapper[4820]: I0201 14:36:40.091479 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee497037-4f55-4f46-87c8-959a7a70a944-kube-api-access-8jdr4" (OuterVolumeSpecName: "kube-api-access-8jdr4") pod "ee497037-4f55-4f46-87c8-959a7a70a944" (UID: "ee497037-4f55-4f46-87c8-959a7a70a944"). InnerVolumeSpecName "kube-api-access-8jdr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:36:40 crc kubenswrapper[4820]: I0201 14:36:40.091761 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee497037-4f55-4f46-87c8-959a7a70a944-config" (OuterVolumeSpecName: "config") pod "ee497037-4f55-4f46-87c8-959a7a70a944" (UID: "ee497037-4f55-4f46-87c8-959a7a70a944"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:36:40 crc kubenswrapper[4820]: I0201 14:36:40.092212 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee497037-4f55-4f46-87c8-959a7a70a944-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ee497037-4f55-4f46-87c8-959a7a70a944" (UID: "ee497037-4f55-4f46-87c8-959a7a70a944"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:36:40 crc kubenswrapper[4820]: I0201 14:36:40.173929 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jdr4\" (UniqueName: \"kubernetes.io/projected/ee497037-4f55-4f46-87c8-959a7a70a944-kube-api-access-8jdr4\") on node \"crc\" DevicePath \"\"" Feb 01 14:36:40 crc kubenswrapper[4820]: I0201 14:36:40.173959 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee497037-4f55-4f46-87c8-959a7a70a944-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:36:40 crc kubenswrapper[4820]: I0201 14:36:40.173969 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee497037-4f55-4f46-87c8-959a7a70a944-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 01 14:36:40 crc kubenswrapper[4820]: I0201 14:36:40.173978 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee497037-4f55-4f46-87c8-959a7a70a944-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 01 14:36:40 crc kubenswrapper[4820]: I0201 14:36:40.753757 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-wxbt2" event={"ID":"66252276-3151-4fc8-accd-e6f036d64ba5","Type":"ContainerStarted","Data":"b28a153dc011e64e9a490df9f4b31552ab784ef2f8ec11f5138f823947e583f6"} Feb 01 14:36:40 crc kubenswrapper[4820]: I0201 14:36:40.754521 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-wxbt2" Feb 01 14:36:40 crc kubenswrapper[4820]: I0201 14:36:40.756315 4820 generic.go:334] "Generic (PLEG): container finished" podID="75447ae7-687e-45f6-925f-7091cd5c8930" containerID="1f6271db3e143ef076f2408370bc8d559ff71963782d971705252c78d3002ccc" exitCode=0 Feb 01 14:36:40 crc kubenswrapper[4820]: I0201 14:36:40.756397 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rqms2" event={"ID":"75447ae7-687e-45f6-925f-7091cd5c8930","Type":"ContainerDied","Data":"1f6271db3e143ef076f2408370bc8d559ff71963782d971705252c78d3002ccc"} Feb 01 14:36:40 crc kubenswrapper[4820]: I0201 14:36:40.757967 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c49d65b-e444-406e-8b45-e95ba6bbb52b","Type":"ContainerStarted","Data":"27369d8748d90c3ec8b78c97a7e82fceb121c2da173c2abff85a7bc25bf5ce13"} Feb 01 14:36:40 crc kubenswrapper[4820]: I0201 14:36:40.760392 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-ssq2s" event={"ID":"ee497037-4f55-4f46-87c8-959a7a70a944","Type":"ContainerDied","Data":"8b00f1a25c0736993e6f34eed22e6d9f765515f9871fec10ad64fa438bc39070"} Feb 01 14:36:40 crc kubenswrapper[4820]: I0201 14:36:40.760445 4820 scope.go:117] "RemoveContainer" containerID="03ca94aee2a85410d2f97e551f5253e0e9b30516b6a3b569a5a009a7411caaf5" Feb 01 14:36:40 crc kubenswrapper[4820]: I0201 14:36:40.760574 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-ssq2s" Feb 01 14:36:40 crc kubenswrapper[4820]: I0201 14:36:40.780388 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-wxbt2" podStartSLOduration=2.780368905 podStartE2EDuration="2.780368905s" podCreationTimestamp="2026-02-01 14:36:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:36:40.772562444 +0000 UTC m=+942.292928728" watchObservedRunningTime="2026-02-01 14:36:40.780368905 +0000 UTC m=+942.300735189" Feb 01 14:36:40 crc kubenswrapper[4820]: I0201 14:36:40.847940 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-ssq2s"] Feb 01 14:36:40 crc kubenswrapper[4820]: I0201 14:36:40.852542 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-ssq2s"] Feb 01 14:36:41 crc kubenswrapper[4820]: I0201 14:36:41.206344 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee497037-4f55-4f46-87c8-959a7a70a944" path="/var/lib/kubelet/pods/ee497037-4f55-4f46-87c8-959a7a70a944/volumes" Feb 01 14:36:41 crc kubenswrapper[4820]: I0201 14:36:41.771310 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-l4qhs" event={"ID":"e613c7d2-9b6b-4e3c-93d7-617b826931c7","Type":"ContainerStarted","Data":"31d0c2e1455c61c3a2bb923ce6a38f5843a6afb35d3b9d974c0dd9905173ce4b"} Feb 01 14:36:41 crc kubenswrapper[4820]: I0201 14:36:41.775567 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rqms2" event={"ID":"75447ae7-687e-45f6-925f-7091cd5c8930","Type":"ContainerStarted","Data":"a6cdb34955fb6ae5657a93f11b9b9c721c8dd59c03bbcc858f8e282b886d6522"} Feb 01 14:36:41 crc kubenswrapper[4820]: I0201 14:36:41.775609 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rqms2" event={"ID":"75447ae7-687e-45f6-925f-7091cd5c8930","Type":"ContainerStarted","Data":"1726dd37a9d82b8e8720e95e84cbce2e1a95494ef04f36747feb65271fc2d44c"} Feb 01 14:36:41 crc kubenswrapper[4820]: I0201 14:36:41.777032 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"48888a7d-f908-4f3c-9335-d2ebe8e19690","Type":"ContainerStarted","Data":"2fdcb54d1dcd8ffa5eade76b9e8084d2b95588e0bccd683d7be5327cdab51f73"} Feb 01 14:36:41 crc kubenswrapper[4820]: I0201 14:36:41.781503 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"584885db-41e8-4667-96bd-3f180ac41ae4","Type":"ContainerStarted","Data":"f46dc7590dafddbb90de3e0fad31db1a1a687c61ae044d7f32d8dd45e97fbb53"} Feb 01 14:36:41 crc kubenswrapper[4820]: I0201 14:36:41.836345 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.844419505 podStartE2EDuration="15.836321371s" podCreationTimestamp="2026-02-01 14:36:26 +0000 UTC" firstStartedPulling="2026-02-01 14:36:29.745307119 +0000 UTC m=+931.265673403" lastFinishedPulling="2026-02-01 14:36:40.737208985 +0000 UTC m=+942.257575269" observedRunningTime="2026-02-01 14:36:41.817471413 +0000 UTC m=+943.337837697" watchObservedRunningTime="2026-02-01 14:36:41.836321371 +0000 UTC m=+943.356687655" Feb 01 14:36:41 crc kubenswrapper[4820]: I0201 14:36:41.837000 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-l4qhs" 
podStartSLOduration=3.198925195 podStartE2EDuration="4.836991616s" podCreationTimestamp="2026-02-01 14:36:37 +0000 UTC" firstStartedPulling="2026-02-01 14:36:39.043372288 +0000 UTC m=+940.563738572" lastFinishedPulling="2026-02-01 14:36:40.681438709 +0000 UTC m=+942.201804993" observedRunningTime="2026-02-01 14:36:41.797500736 +0000 UTC m=+943.317867020" watchObservedRunningTime="2026-02-01 14:36:41.836991616 +0000 UTC m=+943.357357900" Feb 01 14:36:41 crc kubenswrapper[4820]: I0201 14:36:41.856306 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=5.827268878 podStartE2EDuration="16.856288186s" podCreationTimestamp="2026-02-01 14:36:25 +0000 UTC" firstStartedPulling="2026-02-01 14:36:30.218101456 +0000 UTC m=+931.738467740" lastFinishedPulling="2026-02-01 14:36:41.247120764 +0000 UTC m=+942.767487048" observedRunningTime="2026-02-01 14:36:41.844857588 +0000 UTC m=+943.365223882" watchObservedRunningTime="2026-02-01 14:36:41.856288186 +0000 UTC m=+943.376654470" Feb 01 14:36:42 crc kubenswrapper[4820]: I0201 14:36:42.094163 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:42 crc kubenswrapper[4820]: I0201 14:36:42.094217 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:42 crc kubenswrapper[4820]: I0201 14:36:42.150187 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:42 crc kubenswrapper[4820]: I0201 14:36:42.788662 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rqms2" Feb 01 14:36:42 crc kubenswrapper[4820]: I0201 14:36:42.789191 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rqms2" Feb 01 14:36:42 crc kubenswrapper[4820]: I0201 14:36:42.806555 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-rqms2" podStartSLOduration=10.687508828 podStartE2EDuration="17.806535412s" podCreationTimestamp="2026-02-01 14:36:25 +0000 UTC" firstStartedPulling="2026-02-01 14:36:30.117372297 +0000 UTC m=+931.637738581" lastFinishedPulling="2026-02-01 14:36:37.236398891 +0000 UTC m=+938.756765165" observedRunningTime="2026-02-01 14:36:42.804091042 +0000 UTC m=+944.324457326" watchObservedRunningTime="2026-02-01 14:36:42.806535412 +0000 UTC m=+944.326901696" Feb 01 14:36:43 crc kubenswrapper[4820]: I0201 14:36:43.107705 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:43 crc kubenswrapper[4820]: I0201 14:36:43.107757 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:43 crc kubenswrapper[4820]: I0201 14:36:43.155451 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:43 crc kubenswrapper[4820]: I0201 14:36:43.798586 4820 generic.go:334] "Generic (PLEG): container finished" podID="530f8225-6e15-4177-9b64-24e4f767f6c5" containerID="ed67ff52a71b7be2570e379585fc5d071812cf77c47a88a016723dc4eabeec7f" exitCode=0 Feb 01 14:36:43 crc kubenswrapper[4820]: I0201 14:36:43.798668 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"530f8225-6e15-4177-9b64-24e4f767f6c5","Type":"ContainerDied","Data":"ed67ff52a71b7be2570e379585fc5d071812cf77c47a88a016723dc4eabeec7f"} Feb 01 14:36:43 crc kubenswrapper[4820]: I0201 14:36:43.802330 4820 generic.go:334] "Generic (PLEG): container finished" podID="e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b" containerID="83cdd6b4cd9e8d755c60d2a0a053ebd5a37fe1593c1c3f30a04ecdfbf0c028b8" exitCode=0 Feb 01 14:36:43 crc kubenswrapper[4820]: I0201 14:36:43.802364 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b","Type":"ContainerDied","Data":"83cdd6b4cd9e8d755c60d2a0a053ebd5a37fe1593c1c3f30a04ecdfbf0c028b8"} Feb 01 14:36:43 crc kubenswrapper[4820]: I0201 14:36:43.864448 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 01 14:36:43 crc kubenswrapper[4820]: I0201 14:36:43.877181 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.216924 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 01 14:36:44 crc kubenswrapper[4820]: E0201 14:36:44.228242 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee497037-4f55-4f46-87c8-959a7a70a944" containerName="init" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.228302 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee497037-4f55-4f46-87c8-959a7a70a944" containerName="init" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.228822 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee497037-4f55-4f46-87c8-959a7a70a944" containerName="init" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.230168 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.230275 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.233360 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.233945 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-zb654" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.234010 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.234325 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.345821 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1b1815f-5154-480a-be95-af29b7635c0c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a1b1815f-5154-480a-be95-af29b7635c0c\") " pod="openstack/ovn-northd-0" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.346175 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a1b1815f-5154-480a-be95-af29b7635c0c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a1b1815f-5154-480a-be95-af29b7635c0c\") " pod="openstack/ovn-northd-0" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.346208 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1b1815f-5154-480a-be95-af29b7635c0c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a1b1815f-5154-480a-be95-af29b7635c0c\") " pod="openstack/ovn-northd-0" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.346235 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czxps\" (UniqueName: \"kubernetes.io/projected/a1b1815f-5154-480a-be95-af29b7635c0c-kube-api-access-czxps\") pod \"ovn-northd-0\" (UID: \"a1b1815f-5154-480a-be95-af29b7635c0c\") " pod="openstack/ovn-northd-0" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.346259 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1b1815f-5154-480a-be95-af29b7635c0c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a1b1815f-5154-480a-be95-af29b7635c0c\") " pod="openstack/ovn-northd-0" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.346293 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1b1815f-5154-480a-be95-af29b7635c0c-scripts\") pod \"ovn-northd-0\" (UID: \"a1b1815f-5154-480a-be95-af29b7635c0c\") " pod="openstack/ovn-northd-0" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.346317 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1b1815f-5154-480a-be95-af29b7635c0c-config\") pod \"ovn-northd-0\" (UID: \"a1b1815f-5154-480a-be95-af29b7635c0c\") " pod="openstack/ovn-northd-0" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.447728 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a1b1815f-5154-480a-be95-af29b7635c0c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a1b1815f-5154-480a-be95-af29b7635c0c\") " pod="openstack/ovn-northd-0" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.447805 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czxps\" (UniqueName: \"kubernetes.io/projected/a1b1815f-5154-480a-be95-af29b7635c0c-kube-api-access-czxps\") pod \"ovn-northd-0\" (UID: \"a1b1815f-5154-480a-be95-af29b7635c0c\") " pod="openstack/ovn-northd-0" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.447844 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1b1815f-5154-480a-be95-af29b7635c0c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a1b1815f-5154-480a-be95-af29b7635c0c\") " pod="openstack/ovn-northd-0" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.447904 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1b1815f-5154-480a-be95-af29b7635c0c-scripts\") pod \"ovn-northd-0\" (UID: \"a1b1815f-5154-480a-be95-af29b7635c0c\") " pod="openstack/ovn-northd-0" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.447942 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1b1815f-5154-480a-be95-af29b7635c0c-config\") pod \"ovn-northd-0\" (UID: \"a1b1815f-5154-480a-be95-af29b7635c0c\") " pod="openstack/ovn-northd-0" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.448007 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1b1815f-5154-480a-be95-af29b7635c0c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a1b1815f-5154-480a-be95-af29b7635c0c\") " pod="openstack/ovn-northd-0" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.448046 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a1b1815f-5154-480a-be95-af29b7635c0c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a1b1815f-5154-480a-be95-af29b7635c0c\") " pod="openstack/ovn-northd-0" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.448745 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a1b1815f-5154-480a-be95-af29b7635c0c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a1b1815f-5154-480a-be95-af29b7635c0c\") " pod="openstack/ovn-northd-0" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.449122 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1b1815f-5154-480a-be95-af29b7635c0c-scripts\") pod \"ovn-northd-0\" (UID: \"a1b1815f-5154-480a-be95-af29b7635c0c\") " pod="openstack/ovn-northd-0" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.449187 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1b1815f-5154-480a-be95-af29b7635c0c-config\") pod \"ovn-northd-0\" (UID: \"a1b1815f-5154-480a-be95-af29b7635c0c\") " pod="openstack/ovn-northd-0" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.453600 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a1b1815f-5154-480a-be95-af29b7635c0c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a1b1815f-5154-480a-be95-af29b7635c0c\") " pod="openstack/ovn-northd-0" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.453600 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1b1815f-5154-480a-be95-af29b7635c0c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a1b1815f-5154-480a-be95-af29b7635c0c\") " pod="openstack/ovn-northd-0" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.454235 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1b1815f-5154-480a-be95-af29b7635c0c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a1b1815f-5154-480a-be95-af29b7635c0c\") " pod="openstack/ovn-northd-0" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.463993 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czxps\" (UniqueName: \"kubernetes.io/projected/a1b1815f-5154-480a-be95-af29b7635c0c-kube-api-access-czxps\") pod \"ovn-northd-0\" (UID: \"a1b1815f-5154-480a-be95-af29b7635c0c\") " pod="openstack/ovn-northd-0" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.561462 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.827464 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"530f8225-6e15-4177-9b64-24e4f767f6c5","Type":"ContainerStarted","Data":"ee35b3c5dd87a6bb99ecaf5b805f4f74364c4dff6d07752decc4c5ba9f2dc598"} Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.834119 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b","Type":"ContainerStarted","Data":"11b0181769a9dc1b2c5bfeec0b33e37f401edb7a344d902654d7c73b8108c995"} Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.854484 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=19.56001334 podStartE2EDuration="26.854469129s" podCreationTimestamp="2026-02-01 14:36:18 +0000 UTC" firstStartedPulling="2026-02-01 14:36:30.015643883 +0000 UTC m=+931.536010167" lastFinishedPulling="2026-02-01 14:36:37.310099672 +0000 UTC m=+938.830465956" observedRunningTime="2026-02-01 14:36:44.846521896 +0000 UTC m=+946.366888190" watchObservedRunningTime="2026-02-01 14:36:44.854469129 +0000 UTC m=+946.374835413" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.855226 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.881745 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=21.184920188 podStartE2EDuration="28.881724841s" podCreationTimestamp="2026-02-01 14:36:16 +0000 UTC" firstStartedPulling="2026-02-01 14:36:29.616059277 +0000 UTC m=+931.136425561" lastFinishedPulling="2026-02-01 14:36:37.312863929 +0000 UTC m=+938.833230214" observedRunningTime="2026-02-01 14:36:44.871432902 +0000 UTC m=+946.391799206" watchObservedRunningTime="2026-02-01 14:36:44.881724841 +0000 UTC m=+946.402091125" Feb 01 14:36:44 crc kubenswrapper[4820]: I0201 14:36:44.996538 4820 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/ovn-northd-0"] Feb 01 14:36:45 crc kubenswrapper[4820]: I0201 14:36:45.842350 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a1b1815f-5154-480a-be95-af29b7635c0c","Type":"ContainerStarted","Data":"cb870991209564727615b6e1a8e6adf963cbad0100f7ee648d0cb055a81d15b1"} Feb 01 14:36:47 crc kubenswrapper[4820]: I0201 14:36:47.948585 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 01 14:36:47 crc kubenswrapper[4820]: I0201 14:36:47.949535 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 01 14:36:48 crc kubenswrapper[4820]: I0201 14:36:48.467117 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-wxbt2" Feb 01 14:36:48 crc kubenswrapper[4820]: I0201 14:36:48.526494 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hxdgb"] Feb 01 14:36:48 crc kubenswrapper[4820]: I0201 14:36:48.526800 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-hxdgb" podUID="8449fcd6-ef50-459d-9b6e-77c0065e2880" containerName="dnsmasq-dns" containerID="cri-o://f012224863bbd8454dd125e7c76568a19a7ac7208e2e89e9bd0c5a0f6aeb7006" gracePeriod=10 Feb 01 14:36:48 crc kubenswrapper[4820]: I0201 14:36:48.870435 4820 generic.go:334] "Generic (PLEG): container finished" podID="8449fcd6-ef50-459d-9b6e-77c0065e2880" containerID="f012224863bbd8454dd125e7c76568a19a7ac7208e2e89e9bd0c5a0f6aeb7006" exitCode=0 Feb 01 14:36:48 crc kubenswrapper[4820]: I0201 14:36:48.871272 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-hxdgb" event={"ID":"8449fcd6-ef50-459d-9b6e-77c0065e2880","Type":"ContainerDied","Data":"f012224863bbd8454dd125e7c76568a19a7ac7208e2e89e9bd0c5a0f6aeb7006"} Feb 01 14:36:48 crc kubenswrapper[4820]: I0201 14:36:48.957979 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-hxdgb" Feb 01 14:36:49 crc kubenswrapper[4820]: I0201 14:36:49.026973 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8449fcd6-ef50-459d-9b6e-77c0065e2880-config\") pod \"8449fcd6-ef50-459d-9b6e-77c0065e2880\" (UID: \"8449fcd6-ef50-459d-9b6e-77c0065e2880\") " Feb 01 14:36:49 crc kubenswrapper[4820]: I0201 14:36:49.027050 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pxhz\" (UniqueName: \"kubernetes.io/projected/8449fcd6-ef50-459d-9b6e-77c0065e2880-kube-api-access-4pxhz\") pod \"8449fcd6-ef50-459d-9b6e-77c0065e2880\" (UID: \"8449fcd6-ef50-459d-9b6e-77c0065e2880\") " Feb 01 14:36:49 crc kubenswrapper[4820]: I0201 14:36:49.027203 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8449fcd6-ef50-459d-9b6e-77c0065e2880-dns-svc\") pod \"8449fcd6-ef50-459d-9b6e-77c0065e2880\" (UID: \"8449fcd6-ef50-459d-9b6e-77c0065e2880\") " Feb 01 14:36:49 crc kubenswrapper[4820]: I0201 14:36:49.033949 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8449fcd6-ef50-459d-9b6e-77c0065e2880-kube-api-access-4pxhz" (OuterVolumeSpecName: "kube-api-access-4pxhz") pod "8449fcd6-ef50-459d-9b6e-77c0065e2880" (UID: "8449fcd6-ef50-459d-9b6e-77c0065e2880"). InnerVolumeSpecName "kube-api-access-4pxhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:36:49 crc kubenswrapper[4820]: I0201 14:36:49.067691 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8449fcd6-ef50-459d-9b6e-77c0065e2880-config" (OuterVolumeSpecName: "config") pod "8449fcd6-ef50-459d-9b6e-77c0065e2880" (UID: "8449fcd6-ef50-459d-9b6e-77c0065e2880"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:36:49 crc kubenswrapper[4820]: I0201 14:36:49.078739 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8449fcd6-ef50-459d-9b6e-77c0065e2880-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8449fcd6-ef50-459d-9b6e-77c0065e2880" (UID: "8449fcd6-ef50-459d-9b6e-77c0065e2880"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:36:49 crc kubenswrapper[4820]: I0201 14:36:49.129131 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pxhz\" (UniqueName: \"kubernetes.io/projected/8449fcd6-ef50-459d-9b6e-77c0065e2880-kube-api-access-4pxhz\") on node \"crc\" DevicePath \"\"" Feb 01 14:36:49 crc kubenswrapper[4820]: I0201 14:36:49.129165 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8449fcd6-ef50-459d-9b6e-77c0065e2880-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 01 14:36:49 crc kubenswrapper[4820]: I0201 14:36:49.129176 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8449fcd6-ef50-459d-9b6e-77c0065e2880-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:36:49 crc kubenswrapper[4820]: I0201 14:36:49.242190 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:36:49 crc kubenswrapper[4820]: I0201 14:36:49.242596 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:36:49 crc kubenswrapper[4820]: I0201 14:36:49.519293 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:49 crc kubenswrapper[4820]: I0201 14:36:49.519326 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 01 14:36:49 crc kubenswrapper[4820]: I0201 14:36:49.877831 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a1b1815f-5154-480a-be95-af29b7635c0c","Type":"ContainerStarted","Data":"041681c01ca0b835bb1f2ec8cc033c53dc19cd88182631b802230167e76842c9"} Feb 01 14:36:49 crc kubenswrapper[4820]: I0201 14:36:49.877901 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a1b1815f-5154-480a-be95-af29b7635c0c","Type":"ContainerStarted","Data":"97cb1b755b12e1a7fb0bf8c46206bc6dab573a15fb7721c36afedf68edacb228"} Feb 01 14:36:49 crc kubenswrapper[4820]: I0201 14:36:49.878168 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 01 14:36:49 crc kubenswrapper[4820]: I0201 14:36:49.879745 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-hxdgb" event={"ID":"8449fcd6-ef50-459d-9b6e-77c0065e2880","Type":"ContainerDied","Data":"e6f7288324f47e514c5362d1214901f64daf30c79d5fa282861f9fac00d4eeac"} Feb 01 14:36:49 crc kubenswrapper[4820]: I0201 14:36:49.879780 4820 scope.go:117] "RemoveContainer" containerID="f012224863bbd8454dd125e7c76568a19a7ac7208e2e89e9bd0c5a0f6aeb7006" Feb 01 14:36:49 crc kubenswrapper[4820]: I0201 14:36:49.879921 4820 util.go:48] "No ready sandbox for pod can be found. 
Feb 01 14:36:49 crc kubenswrapper[4820]: I0201 14:36:49.906360 4820 scope.go:117] "RemoveContainer" containerID="663f45eea3857f24e11446098fcdcad0e2edf271e93a280a03c1c49e72af94dc"
Feb 01 14:36:49 crc kubenswrapper[4820]: I0201 14:36:49.913855 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.107517157 podStartE2EDuration="5.913839641s" podCreationTimestamp="2026-02-01 14:36:44 +0000 UTC" firstStartedPulling="2026-02-01 14:36:45.007793327 +0000 UTC m=+946.528159611" lastFinishedPulling="2026-02-01 14:36:48.814115811 +0000 UTC m=+950.334482095" observedRunningTime="2026-02-01 14:36:49.903213692 +0000 UTC m=+951.423579996" watchObservedRunningTime="2026-02-01 14:36:49.913839641 +0000 UTC m=+951.434205925"
Feb 01 14:36:49 crc kubenswrapper[4820]: I0201 14:36:49.918417 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hxdgb"]
Feb 01 14:36:49 crc kubenswrapper[4820]: I0201 14:36:49.923617 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hxdgb"]
Feb 01 14:36:51 crc kubenswrapper[4820]: I0201 14:36:51.207817 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8449fcd6-ef50-459d-9b6e-77c0065e2880" path="/var/lib/kubelet/pods/8449fcd6-ef50-459d-9b6e-77c0065e2880/volumes"
Feb 01 14:36:51 crc kubenswrapper[4820]: I0201 14:36:51.237398 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 01 14:36:51 crc kubenswrapper[4820]: I0201 14:36:51.301736 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 01 14:36:51 crc kubenswrapper[4820]: I0201 14:36:51.596172 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 01 14:36:52 crc kubenswrapper[4820]: I0201 14:36:52.216381 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Feb 01 14:36:52 crc kubenswrapper[4820]: I0201 14:36:52.284955 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.127358 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7cb0-account-create-update-cn5z4"]
Feb 01 14:36:55 crc kubenswrapper[4820]: E0201 14:36:55.127750 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8449fcd6-ef50-459d-9b6e-77c0065e2880" containerName="init"
Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.127763 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8449fcd6-ef50-459d-9b6e-77c0065e2880" containerName="init"
Feb 01 14:36:55 crc kubenswrapper[4820]: E0201 14:36:55.127798 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8449fcd6-ef50-459d-9b6e-77c0065e2880" containerName="dnsmasq-dns"
Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.127804 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8449fcd6-ef50-459d-9b6e-77c0065e2880" containerName="dnsmasq-dns"
Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.128007 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8449fcd6-ef50-459d-9b6e-77c0065e2880" containerName="dnsmasq-dns"
Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.128623 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7cb0-account-create-update-cn5z4"
Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.131023 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.145054 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-h7kmt"]
Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.146252 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-h7kmt"
Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.161078 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-h7kmt"]
Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.169511 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7cb0-account-create-update-cn5z4"]
Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.221505 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/645c5b61-b15a-4734-a5ab-49c05d6046ec-operator-scripts\") pod \"glance-7cb0-account-create-update-cn5z4\" (UID: \"645c5b61-b15a-4734-a5ab-49c05d6046ec\") " pod="openstack/glance-7cb0-account-create-update-cn5z4"
Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.221590 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9k58\" (UniqueName: \"kubernetes.io/projected/645c5b61-b15a-4734-a5ab-49c05d6046ec-kube-api-access-d9k58\") pod \"glance-7cb0-account-create-update-cn5z4\" (UID: \"645c5b61-b15a-4734-a5ab-49c05d6046ec\") " pod="openstack/glance-7cb0-account-create-update-cn5z4"
Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.323291 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcbcq\" (UniqueName: \"kubernetes.io/projected/2adaa104-f99c-44b1-b7de-d9cb4b660fa3-kube-api-access-mcbcq\") pod \"glance-db-create-h7kmt\" (UID: \"2adaa104-f99c-44b1-b7de-d9cb4b660fa3\") " pod="openstack/glance-db-create-h7kmt"
Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.323335 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9k58\" (UniqueName: \"kubernetes.io/projected/645c5b61-b15a-4734-a5ab-49c05d6046ec-kube-api-access-d9k58\") pod \"glance-7cb0-account-create-update-cn5z4\" (UID: \"645c5b61-b15a-4734-a5ab-49c05d6046ec\") " pod="openstack/glance-7cb0-account-create-update-cn5z4"
Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.323360 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2adaa104-f99c-44b1-b7de-d9cb4b660fa3-operator-scripts\") pod \"glance-db-create-h7kmt\" (UID: \"2adaa104-f99c-44b1-b7de-d9cb4b660fa3\") " pod="openstack/glance-db-create-h7kmt"
Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.323466 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/645c5b61-b15a-4734-a5ab-49c05d6046ec-operator-scripts\") pod \"glance-7cb0-account-create-update-cn5z4\" (UID: \"645c5b61-b15a-4734-a5ab-49c05d6046ec\") " pod="openstack/glance-7cb0-account-create-update-cn5z4"
Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.324165 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/645c5b61-b15a-4734-a5ab-49c05d6046ec-operator-scripts\") pod \"glance-7cb0-account-create-update-cn5z4\" (UID: \"645c5b61-b15a-4734-a5ab-49c05d6046ec\") " pod="openstack/glance-7cb0-account-create-update-cn5z4"
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/645c5b61-b15a-4734-a5ab-49c05d6046ec-operator-scripts\") pod \"glance-7cb0-account-create-update-cn5z4\" (UID: \"645c5b61-b15a-4734-a5ab-49c05d6046ec\") " pod="openstack/glance-7cb0-account-create-update-cn5z4" Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.348073 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9k58\" (UniqueName: \"kubernetes.io/projected/645c5b61-b15a-4734-a5ab-49c05d6046ec-kube-api-access-d9k58\") pod \"glance-7cb0-account-create-update-cn5z4\" (UID: \"645c5b61-b15a-4734-a5ab-49c05d6046ec\") " pod="openstack/glance-7cb0-account-create-update-cn5z4" Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.424664 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcbcq\" (UniqueName: \"kubernetes.io/projected/2adaa104-f99c-44b1-b7de-d9cb4b660fa3-kube-api-access-mcbcq\") pod \"glance-db-create-h7kmt\" (UID: \"2adaa104-f99c-44b1-b7de-d9cb4b660fa3\") " pod="openstack/glance-db-create-h7kmt" Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.424719 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2adaa104-f99c-44b1-b7de-d9cb4b660fa3-operator-scripts\") pod \"glance-db-create-h7kmt\" (UID: \"2adaa104-f99c-44b1-b7de-d9cb4b660fa3\") " pod="openstack/glance-db-create-h7kmt" Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.425554 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2adaa104-f99c-44b1-b7de-d9cb4b660fa3-operator-scripts\") pod \"glance-db-create-h7kmt\" (UID: \"2adaa104-f99c-44b1-b7de-d9cb4b660fa3\") " pod="openstack/glance-db-create-h7kmt" Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.442444 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcbcq\" (UniqueName: \"kubernetes.io/projected/2adaa104-f99c-44b1-b7de-d9cb4b660fa3-kube-api-access-mcbcq\") pod \"glance-db-create-h7kmt\" (UID: \"2adaa104-f99c-44b1-b7de-d9cb4b660fa3\") " pod="openstack/glance-db-create-h7kmt" Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.455483 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7cb0-account-create-update-cn5z4" Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.487258 4820 util.go:30] "No sandbox for pod can be found. 
Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.883109 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7cb0-account-create-update-cn5z4"]
Feb 01 14:36:55 crc kubenswrapper[4820]: W0201 14:36:55.890675 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod645c5b61_b15a_4734_a5ab_49c05d6046ec.slice/crio-b557f5e15074a84ebdb47f3106c1daeb236b632fc920bce432e18bc38e0a211a WatchSource:0}: Error finding container b557f5e15074a84ebdb47f3106c1daeb236b632fc920bce432e18bc38e0a211a: Status 404 returned error can't find the container with id b557f5e15074a84ebdb47f3106c1daeb236b632fc920bce432e18bc38e0a211a
Feb 01 14:36:55 crc kubenswrapper[4820]: I0201 14:36:55.920605 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7cb0-account-create-update-cn5z4" event={"ID":"645c5b61-b15a-4734-a5ab-49c05d6046ec","Type":"ContainerStarted","Data":"b557f5e15074a84ebdb47f3106c1daeb236b632fc920bce432e18bc38e0a211a"}
Feb 01 14:36:56 crc kubenswrapper[4820]: I0201 14:36:56.005066 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-h7kmt"]
Feb 01 14:36:56 crc kubenswrapper[4820]: W0201 14:36:56.008600 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2adaa104_f99c_44b1_b7de_d9cb4b660fa3.slice/crio-02b250429226e2eeae9561de4f5b54bf9f17bc631cdb26990a444237b34be87d WatchSource:0}: Error finding container 02b250429226e2eeae9561de4f5b54bf9f17bc631cdb26990a444237b34be87d: Status 404 returned error can't find the container with id 02b250429226e2eeae9561de4f5b54bf9f17bc631cdb26990a444237b34be87d
Feb 01 14:36:56 crc kubenswrapper[4820]: I0201 14:36:56.574097 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-m499w"]
Feb 01 14:36:56 crc kubenswrapper[4820]: I0201 14:36:56.575410 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-m499w"
Feb 01 14:36:56 crc kubenswrapper[4820]: I0201 14:36:56.576975 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 01 14:36:56 crc kubenswrapper[4820]: I0201 14:36:56.633049 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-m499w"]
Feb 01 14:36:56 crc kubenswrapper[4820]: I0201 14:36:56.746023 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4c9q\" (UniqueName: \"kubernetes.io/projected/5ceffe48-7924-4f28-acbe-52ad3aa1c827-kube-api-access-d4c9q\") pod \"root-account-create-update-m499w\" (UID: \"5ceffe48-7924-4f28-acbe-52ad3aa1c827\") " pod="openstack/root-account-create-update-m499w"
Feb 01 14:36:56 crc kubenswrapper[4820]: I0201 14:36:56.746074 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ceffe48-7924-4f28-acbe-52ad3aa1c827-operator-scripts\") pod \"root-account-create-update-m499w\" (UID: \"5ceffe48-7924-4f28-acbe-52ad3aa1c827\") " pod="openstack/root-account-create-update-m499w"
Feb 01 14:36:56 crc kubenswrapper[4820]: I0201 14:36:56.847661 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4c9q\" (UniqueName: \"kubernetes.io/projected/5ceffe48-7924-4f28-acbe-52ad3aa1c827-kube-api-access-d4c9q\") pod \"root-account-create-update-m499w\" (UID: \"5ceffe48-7924-4f28-acbe-52ad3aa1c827\") " pod="openstack/root-account-create-update-m499w"
Feb 01 14:36:56 crc kubenswrapper[4820]: I0201 14:36:56.847727 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ceffe48-7924-4f28-acbe-52ad3aa1c827-operator-scripts\") pod \"root-account-create-update-m499w\" (UID: \"5ceffe48-7924-4f28-acbe-52ad3aa1c827\") " pod="openstack/root-account-create-update-m499w"
Feb 01 14:36:56 crc kubenswrapper[4820]: I0201 14:36:56.848556 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ceffe48-7924-4f28-acbe-52ad3aa1c827-operator-scripts\") pod \"root-account-create-update-m499w\" (UID: \"5ceffe48-7924-4f28-acbe-52ad3aa1c827\") " pod="openstack/root-account-create-update-m499w"
Feb 01 14:36:56 crc kubenswrapper[4820]: I0201 14:36:56.865748 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4c9q\" (UniqueName: \"kubernetes.io/projected/5ceffe48-7924-4f28-acbe-52ad3aa1c827-kube-api-access-d4c9q\") pod \"root-account-create-update-m499w\" (UID: \"5ceffe48-7924-4f28-acbe-52ad3aa1c827\") " pod="openstack/root-account-create-update-m499w"
Feb 01 14:36:56 crc kubenswrapper[4820]: I0201 14:36:56.930521 4820 generic.go:334] "Generic (PLEG): container finished" podID="645c5b61-b15a-4734-a5ab-49c05d6046ec" containerID="1c485e86d7d95a29723eaf1e539a0c40d6f9446e45011006a65dbf1107740cf4" exitCode=0
Feb 01 14:36:56 crc kubenswrapper[4820]: I0201 14:36:56.930593 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7cb0-account-create-update-cn5z4" event={"ID":"645c5b61-b15a-4734-a5ab-49c05d6046ec","Type":"ContainerDied","Data":"1c485e86d7d95a29723eaf1e539a0c40d6f9446e45011006a65dbf1107740cf4"}
Feb 01 14:36:56 crc kubenswrapper[4820]: I0201 14:36:56.932610 4820 generic.go:334] "Generic (PLEG): container finished" podID="2adaa104-f99c-44b1-b7de-d9cb4b660fa3" containerID="69ee4b575970bf44d5d120fa2ef8b20e277ab4ef2de30fff542301b2c5ca8686" exitCode=0
Feb 01 14:36:56 crc kubenswrapper[4820]: I0201 14:36:56.932684 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-h7kmt" event={"ID":"2adaa104-f99c-44b1-b7de-d9cb4b660fa3","Type":"ContainerDied","Data":"69ee4b575970bf44d5d120fa2ef8b20e277ab4ef2de30fff542301b2c5ca8686"}
Feb 01 14:36:56 crc kubenswrapper[4820]: I0201 14:36:56.932926 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-h7kmt" event={"ID":"2adaa104-f99c-44b1-b7de-d9cb4b660fa3","Type":"ContainerStarted","Data":"02b250429226e2eeae9561de4f5b54bf9f17bc631cdb26990a444237b34be87d"}
Feb 01 14:36:56 crc kubenswrapper[4820]: I0201 14:36:56.935166 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-m499w"
Feb 01 14:36:57 crc kubenswrapper[4820]: I0201 14:36:57.348427 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-m499w"]
Feb 01 14:36:57 crc kubenswrapper[4820]: I0201 14:36:57.940577 4820 generic.go:334] "Generic (PLEG): container finished" podID="5ceffe48-7924-4f28-acbe-52ad3aa1c827" containerID="f2f787e2f9ab3ca64923d1e68673f434c28a6638f67baa009603248e9682ef60" exitCode=0
Feb 01 14:36:57 crc kubenswrapper[4820]: I0201 14:36:57.941221 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-m499w" event={"ID":"5ceffe48-7924-4f28-acbe-52ad3aa1c827","Type":"ContainerDied","Data":"f2f787e2f9ab3ca64923d1e68673f434c28a6638f67baa009603248e9682ef60"}
Feb 01 14:36:57 crc kubenswrapper[4820]: I0201 14:36:57.941250 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-m499w" event={"ID":"5ceffe48-7924-4f28-acbe-52ad3aa1c827","Type":"ContainerStarted","Data":"feacbf6bda1d6a75915e461fe8ddad761b068e9e16884028a469c89703b48e01"}
Feb 01 14:36:58 crc kubenswrapper[4820]: I0201 14:36:58.397967 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7cb0-account-create-update-cn5z4"
Feb 01 14:36:58 crc kubenswrapper[4820]: I0201 14:36:58.404960 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-h7kmt"
Feb 01 14:36:58 crc kubenswrapper[4820]: I0201 14:36:58.574185 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9k58\" (UniqueName: \"kubernetes.io/projected/645c5b61-b15a-4734-a5ab-49c05d6046ec-kube-api-access-d9k58\") pod \"645c5b61-b15a-4734-a5ab-49c05d6046ec\" (UID: \"645c5b61-b15a-4734-a5ab-49c05d6046ec\") "
Feb 01 14:36:58 crc kubenswrapper[4820]: I0201 14:36:58.574274 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/645c5b61-b15a-4734-a5ab-49c05d6046ec-operator-scripts\") pod \"645c5b61-b15a-4734-a5ab-49c05d6046ec\" (UID: \"645c5b61-b15a-4734-a5ab-49c05d6046ec\") "
Feb 01 14:36:58 crc kubenswrapper[4820]: I0201 14:36:58.574337 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcbcq\" (UniqueName: \"kubernetes.io/projected/2adaa104-f99c-44b1-b7de-d9cb4b660fa3-kube-api-access-mcbcq\") pod \"2adaa104-f99c-44b1-b7de-d9cb4b660fa3\" (UID: \"2adaa104-f99c-44b1-b7de-d9cb4b660fa3\") "
Feb 01 14:36:58 crc kubenswrapper[4820]: I0201 14:36:58.574408 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2adaa104-f99c-44b1-b7de-d9cb4b660fa3-operator-scripts\") pod \"2adaa104-f99c-44b1-b7de-d9cb4b660fa3\" (UID: \"2adaa104-f99c-44b1-b7de-d9cb4b660fa3\") "
Feb 01 14:36:58 crc kubenswrapper[4820]: I0201 14:36:58.575252 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/645c5b61-b15a-4734-a5ab-49c05d6046ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "645c5b61-b15a-4734-a5ab-49c05d6046ec" (UID: "645c5b61-b15a-4734-a5ab-49c05d6046ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:36:58 crc kubenswrapper[4820]: I0201 14:36:58.575292 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2adaa104-f99c-44b1-b7de-d9cb4b660fa3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2adaa104-f99c-44b1-b7de-d9cb4b660fa3" (UID: "2adaa104-f99c-44b1-b7de-d9cb4b660fa3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:36:58 crc kubenswrapper[4820]: I0201 14:36:58.584228 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2adaa104-f99c-44b1-b7de-d9cb4b660fa3-kube-api-access-mcbcq" (OuterVolumeSpecName: "kube-api-access-mcbcq") pod "2adaa104-f99c-44b1-b7de-d9cb4b660fa3" (UID: "2adaa104-f99c-44b1-b7de-d9cb4b660fa3"). InnerVolumeSpecName "kube-api-access-mcbcq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 14:36:58 crc kubenswrapper[4820]: I0201 14:36:58.587080 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/645c5b61-b15a-4734-a5ab-49c05d6046ec-kube-api-access-d9k58" (OuterVolumeSpecName: "kube-api-access-d9k58") pod "645c5b61-b15a-4734-a5ab-49c05d6046ec" (UID: "645c5b61-b15a-4734-a5ab-49c05d6046ec"). InnerVolumeSpecName "kube-api-access-d9k58". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:36:58 crc kubenswrapper[4820]: I0201 14:36:58.676640 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2adaa104-f99c-44b1-b7de-d9cb4b660fa3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:36:58 crc kubenswrapper[4820]: I0201 14:36:58.676679 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9k58\" (UniqueName: \"kubernetes.io/projected/645c5b61-b15a-4734-a5ab-49c05d6046ec-kube-api-access-d9k58\") on node \"crc\" DevicePath \"\"" Feb 01 14:36:58 crc kubenswrapper[4820]: I0201 14:36:58.676691 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/645c5b61-b15a-4734-a5ab-49c05d6046ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:36:58 crc kubenswrapper[4820]: I0201 14:36:58.676699 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcbcq\" (UniqueName: \"kubernetes.io/projected/2adaa104-f99c-44b1-b7de-d9cb4b660fa3-kube-api-access-mcbcq\") on node \"crc\" DevicePath \"\"" Feb 01 14:36:58 crc kubenswrapper[4820]: I0201 14:36:58.948767 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-h7kmt" event={"ID":"2adaa104-f99c-44b1-b7de-d9cb4b660fa3","Type":"ContainerDied","Data":"02b250429226e2eeae9561de4f5b54bf9f17bc631cdb26990a444237b34be87d"} Feb 01 14:36:58 crc kubenswrapper[4820]: I0201 14:36:58.948810 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02b250429226e2eeae9561de4f5b54bf9f17bc631cdb26990a444237b34be87d" Feb 01 14:36:58 crc kubenswrapper[4820]: I0201 14:36:58.948834 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-h7kmt" Feb 01 14:36:58 crc kubenswrapper[4820]: I0201 14:36:58.950295 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7cb0-account-create-update-cn5z4" event={"ID":"645c5b61-b15a-4734-a5ab-49c05d6046ec","Type":"ContainerDied","Data":"b557f5e15074a84ebdb47f3106c1daeb236b632fc920bce432e18bc38e0a211a"} Feb 01 14:36:58 crc kubenswrapper[4820]: I0201 14:36:58.950317 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7cb0-account-create-update-cn5z4" Feb 01 14:36:58 crc kubenswrapper[4820]: I0201 14:36:58.950332 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b557f5e15074a84ebdb47f3106c1daeb236b632fc920bce432e18bc38e0a211a" Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.308623 4820 util.go:48] "No ready sandbox for pod can be found. 
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.466785 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-tlfvt"]
Feb 01 14:36:59 crc kubenswrapper[4820]: E0201 14:36:59.467102 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="645c5b61-b15a-4734-a5ab-49c05d6046ec" containerName="mariadb-account-create-update"
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.467125 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="645c5b61-b15a-4734-a5ab-49c05d6046ec" containerName="mariadb-account-create-update"
Feb 01 14:36:59 crc kubenswrapper[4820]: E0201 14:36:59.467173 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2adaa104-f99c-44b1-b7de-d9cb4b660fa3" containerName="mariadb-database-create"
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.467184 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2adaa104-f99c-44b1-b7de-d9cb4b660fa3" containerName="mariadb-database-create"
Feb 01 14:36:59 crc kubenswrapper[4820]: E0201 14:36:59.467206 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ceffe48-7924-4f28-acbe-52ad3aa1c827" containerName="mariadb-account-create-update"
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.467214 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ceffe48-7924-4f28-acbe-52ad3aa1c827" containerName="mariadb-account-create-update"
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.467384 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ceffe48-7924-4f28-acbe-52ad3aa1c827" containerName="mariadb-account-create-update"
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.467408 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="645c5b61-b15a-4734-a5ab-49c05d6046ec" containerName="mariadb-account-create-update"
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.467422 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2adaa104-f99c-44b1-b7de-d9cb4b660fa3" containerName="mariadb-database-create"
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.467890 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-tlfvt"
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.472397 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-tlfvt"]
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.494399 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4c9q\" (UniqueName: \"kubernetes.io/projected/5ceffe48-7924-4f28-acbe-52ad3aa1c827-kube-api-access-d4c9q\") pod \"5ceffe48-7924-4f28-acbe-52ad3aa1c827\" (UID: \"5ceffe48-7924-4f28-acbe-52ad3aa1c827\") "
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.494596 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ceffe48-7924-4f28-acbe-52ad3aa1c827-operator-scripts\") pod \"5ceffe48-7924-4f28-acbe-52ad3aa1c827\" (UID: \"5ceffe48-7924-4f28-acbe-52ad3aa1c827\") "
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.495243 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ceffe48-7924-4f28-acbe-52ad3aa1c827-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ceffe48-7924-4f28-acbe-52ad3aa1c827" (UID: "5ceffe48-7924-4f28-acbe-52ad3aa1c827"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.499326 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ceffe48-7924-4f28-acbe-52ad3aa1c827-kube-api-access-d4c9q" (OuterVolumeSpecName: "kube-api-access-d4c9q") pod "5ceffe48-7924-4f28-acbe-52ad3aa1c827" (UID: "5ceffe48-7924-4f28-acbe-52ad3aa1c827"). InnerVolumeSpecName "kube-api-access-d4c9q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.555541 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6a2e-account-create-update-6zkcd"]
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.561207 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6a2e-account-create-update-6zkcd"
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.566854 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.573154 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6a2e-account-create-update-6zkcd"]
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.600863 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6dv8\" (UniqueName: \"kubernetes.io/projected/0ef0e812-f6c4-4147-9437-163981354ea2-kube-api-access-x6dv8\") pod \"keystone-db-create-tlfvt\" (UID: \"0ef0e812-f6c4-4147-9437-163981354ea2\") " pod="openstack/keystone-db-create-tlfvt"
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.601009 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef0e812-f6c4-4147-9437-163981354ea2-operator-scripts\") pod \"keystone-db-create-tlfvt\" (UID: \"0ef0e812-f6c4-4147-9437-163981354ea2\") " pod="openstack/keystone-db-create-tlfvt"
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.601099 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4c9q\" (UniqueName: \"kubernetes.io/projected/5ceffe48-7924-4f28-acbe-52ad3aa1c827-kube-api-access-d4c9q\") on node \"crc\" DevicePath \"\""
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.601115 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ceffe48-7924-4f28-acbe-52ad3aa1c827-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.702443 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snp6g\" (UniqueName: \"kubernetes.io/projected/a04eb126-50a3-44e6-9741-345441291daf-kube-api-access-snp6g\") pod \"keystone-6a2e-account-create-update-6zkcd\" (UID: \"a04eb126-50a3-44e6-9741-345441291daf\") " pod="openstack/keystone-6a2e-account-create-update-6zkcd"
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.702501 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef0e812-f6c4-4147-9437-163981354ea2-operator-scripts\") pod \"keystone-db-create-tlfvt\" (UID: \"0ef0e812-f6c4-4147-9437-163981354ea2\") " pod="openstack/keystone-db-create-tlfvt"
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.702599 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a04eb126-50a3-44e6-9741-345441291daf-operator-scripts\") pod \"keystone-6a2e-account-create-update-6zkcd\" (UID: \"a04eb126-50a3-44e6-9741-345441291daf\") " pod="openstack/keystone-6a2e-account-create-update-6zkcd"
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.702638 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6dv8\" (UniqueName: \"kubernetes.io/projected/0ef0e812-f6c4-4147-9437-163981354ea2-kube-api-access-x6dv8\") pod \"keystone-db-create-tlfvt\" (UID: \"0ef0e812-f6c4-4147-9437-163981354ea2\") " pod="openstack/keystone-db-create-tlfvt"
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.703180 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef0e812-f6c4-4147-9437-163981354ea2-operator-scripts\") pod \"keystone-db-create-tlfvt\" (UID: \"0ef0e812-f6c4-4147-9437-163981354ea2\") " pod="openstack/keystone-db-create-tlfvt"
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.722556 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6dv8\" (UniqueName: \"kubernetes.io/projected/0ef0e812-f6c4-4147-9437-163981354ea2-kube-api-access-x6dv8\") pod \"keystone-db-create-tlfvt\" (UID: \"0ef0e812-f6c4-4147-9437-163981354ea2\") " pod="openstack/keystone-db-create-tlfvt"
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.761419 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-5nbk7"]
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.762740 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5nbk7"
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.775721 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5nbk7"]
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.788341 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-tlfvt"
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.803988 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d75a-account-create-update-xbb66"]
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.805287 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d75a-account-create-update-xbb66"
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.807987 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.808599 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snp6g\" (UniqueName: \"kubernetes.io/projected/a04eb126-50a3-44e6-9741-345441291daf-kube-api-access-snp6g\") pod \"keystone-6a2e-account-create-update-6zkcd\" (UID: \"a04eb126-50a3-44e6-9741-345441291daf\") " pod="openstack/keystone-6a2e-account-create-update-6zkcd"
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.808907 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a04eb126-50a3-44e6-9741-345441291daf-operator-scripts\") pod \"keystone-6a2e-account-create-update-6zkcd\" (UID: \"a04eb126-50a3-44e6-9741-345441291daf\") " pod="openstack/keystone-6a2e-account-create-update-6zkcd"
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.810248 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a04eb126-50a3-44e6-9741-345441291daf-operator-scripts\") pod \"keystone-6a2e-account-create-update-6zkcd\" (UID: \"a04eb126-50a3-44e6-9741-345441291daf\") " pod="openstack/keystone-6a2e-account-create-update-6zkcd"
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.818813 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d75a-account-create-update-xbb66"]
Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.819414 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snp6g\" (UniqueName: \"kubernetes.io/projected/a04eb126-50a3-44e6-9741-345441291daf-kube-api-access-snp6g\") pod \"keystone-6a2e-account-create-update-6zkcd\" (UID: \"a04eb126-50a3-44e6-9741-345441291daf\") " pod="openstack/keystone-6a2e-account-create-update-6zkcd"
\"kubernetes.io/projected/a04eb126-50a3-44e6-9741-345441291daf-kube-api-access-snp6g\") pod \"keystone-6a2e-account-create-update-6zkcd\" (UID: \"a04eb126-50a3-44e6-9741-345441291daf\") " pod="openstack/keystone-6a2e-account-create-update-6zkcd" Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.893429 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6a2e-account-create-update-6zkcd" Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.911837 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5md7z\" (UniqueName: \"kubernetes.io/projected/9ece3bc3-7bab-4cd0-8875-1757a5f0b12b-kube-api-access-5md7z\") pod \"placement-db-create-5nbk7\" (UID: \"9ece3bc3-7bab-4cd0-8875-1757a5f0b12b\") " pod="openstack/placement-db-create-5nbk7" Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.912053 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ece3bc3-7bab-4cd0-8875-1757a5f0b12b-operator-scripts\") pod \"placement-db-create-5nbk7\" (UID: \"9ece3bc3-7bab-4cd0-8875-1757a5f0b12b\") " pod="openstack/placement-db-create-5nbk7" Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.912131 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/377152ee-4878-4651-b7b8-3b3612bae8aa-operator-scripts\") pod \"placement-d75a-account-create-update-xbb66\" (UID: \"377152ee-4878-4651-b7b8-3b3612bae8aa\") " pod="openstack/placement-d75a-account-create-update-xbb66" Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.912273 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2m4f\" (UniqueName: \"kubernetes.io/projected/377152ee-4878-4651-b7b8-3b3612bae8aa-kube-api-access-d2m4f\") pod \"placement-d75a-account-create-update-xbb66\" (UID: \"377152ee-4878-4651-b7b8-3b3612bae8aa\") " pod="openstack/placement-d75a-account-create-update-xbb66" Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.961229 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-m499w" event={"ID":"5ceffe48-7924-4f28-acbe-52ad3aa1c827","Type":"ContainerDied","Data":"feacbf6bda1d6a75915e461fe8ddad761b068e9e16884028a469c89703b48e01"} Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.961540 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="feacbf6bda1d6a75915e461fe8ddad761b068e9e16884028a469c89703b48e01" Feb 01 14:36:59 crc kubenswrapper[4820]: I0201 14:36:59.961636 4820 util.go:48] "No ready sandbox for pod can be found. 
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.014013 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ece3bc3-7bab-4cd0-8875-1757a5f0b12b-operator-scripts\") pod \"placement-db-create-5nbk7\" (UID: \"9ece3bc3-7bab-4cd0-8875-1757a5f0b12b\") " pod="openstack/placement-db-create-5nbk7"
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.014072 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/377152ee-4878-4651-b7b8-3b3612bae8aa-operator-scripts\") pod \"placement-d75a-account-create-update-xbb66\" (UID: \"377152ee-4878-4651-b7b8-3b3612bae8aa\") " pod="openstack/placement-d75a-account-create-update-xbb66"
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.014158 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2m4f\" (UniqueName: \"kubernetes.io/projected/377152ee-4878-4651-b7b8-3b3612bae8aa-kube-api-access-d2m4f\") pod \"placement-d75a-account-create-update-xbb66\" (UID: \"377152ee-4878-4651-b7b8-3b3612bae8aa\") " pod="openstack/placement-d75a-account-create-update-xbb66"
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.014201 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5md7z\" (UniqueName: \"kubernetes.io/projected/9ece3bc3-7bab-4cd0-8875-1757a5f0b12b-kube-api-access-5md7z\") pod \"placement-db-create-5nbk7\" (UID: \"9ece3bc3-7bab-4cd0-8875-1757a5f0b12b\") " pod="openstack/placement-db-create-5nbk7"
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.014794 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/377152ee-4878-4651-b7b8-3b3612bae8aa-operator-scripts\") pod \"placement-d75a-account-create-update-xbb66\" (UID: \"377152ee-4878-4651-b7b8-3b3612bae8aa\") " pod="openstack/placement-d75a-account-create-update-xbb66"
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.015059 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ece3bc3-7bab-4cd0-8875-1757a5f0b12b-operator-scripts\") pod \"placement-db-create-5nbk7\" (UID: \"9ece3bc3-7bab-4cd0-8875-1757a5f0b12b\") " pod="openstack/placement-db-create-5nbk7"
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.030541 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5md7z\" (UniqueName: \"kubernetes.io/projected/9ece3bc3-7bab-4cd0-8875-1757a5f0b12b-kube-api-access-5md7z\") pod \"placement-db-create-5nbk7\" (UID: \"9ece3bc3-7bab-4cd0-8875-1757a5f0b12b\") " pod="openstack/placement-db-create-5nbk7"
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.031644 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2m4f\" (UniqueName: \"kubernetes.io/projected/377152ee-4878-4651-b7b8-3b3612bae8aa-kube-api-access-d2m4f\") pod \"placement-d75a-account-create-update-xbb66\" (UID: \"377152ee-4878-4651-b7b8-3b3612bae8aa\") " pod="openstack/placement-d75a-account-create-update-xbb66"
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.081261 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5nbk7"
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.175795 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d75a-account-create-update-xbb66"
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.236773 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-tlfvt"]
Feb 01 14:37:00 crc kubenswrapper[4820]: W0201 14:37:00.244904 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ef0e812_f6c4_4147_9437_163981354ea2.slice/crio-e6659981c4daeaf7918fc6e6f3b62f1d1e294886cd2ab459ab29d27e3d492b4f WatchSource:0}: Error finding container e6659981c4daeaf7918fc6e6f3b62f1d1e294886cd2ab459ab29d27e3d492b4f: Status 404 returned error can't find the container with id e6659981c4daeaf7918fc6e6f3b62f1d1e294886cd2ab459ab29d27e3d492b4f
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.377285 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6a2e-account-create-update-6zkcd"]
Feb 01 14:37:00 crc kubenswrapper[4820]: W0201 14:37:00.394426 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda04eb126_50a3_44e6_9741_345441291daf.slice/crio-2daa2dee4c78f565be27c4475b2a41102f3d738b9903ef2c2715c0ba791e4ac1 WatchSource:0}: Error finding container 2daa2dee4c78f565be27c4475b2a41102f3d738b9903ef2c2715c0ba791e4ac1: Status 404 returned error can't find the container with id 2daa2dee4c78f565be27c4475b2a41102f3d738b9903ef2c2715c0ba791e4ac1
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.405617 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-ww8ln"]
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.406623 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ww8ln"
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.408688 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.408818 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gl8s8"
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.424637 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ww8ln"]
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.425147 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/de2181c8-3297-4f4a-b21a-51a8d2172a9c-db-sync-config-data\") pod \"glance-db-sync-ww8ln\" (UID: \"de2181c8-3297-4f4a-b21a-51a8d2172a9c\") " pod="openstack/glance-db-sync-ww8ln"
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.425245 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de2181c8-3297-4f4a-b21a-51a8d2172a9c-config-data\") pod \"glance-db-sync-ww8ln\" (UID: \"de2181c8-3297-4f4a-b21a-51a8d2172a9c\") " pod="openstack/glance-db-sync-ww8ln"
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.425284 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de2181c8-3297-4f4a-b21a-51a8d2172a9c-combined-ca-bundle\") pod \"glance-db-sync-ww8ln\" (UID: \"de2181c8-3297-4f4a-b21a-51a8d2172a9c\") " pod="openstack/glance-db-sync-ww8ln"
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.425342 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bstp6\" (UniqueName: \"kubernetes.io/projected/de2181c8-3297-4f4a-b21a-51a8d2172a9c-kube-api-access-bstp6\") pod \"glance-db-sync-ww8ln\" (UID: \"de2181c8-3297-4f4a-b21a-51a8d2172a9c\") " pod="openstack/glance-db-sync-ww8ln"
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.499293 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5nbk7"]
Feb 01 14:37:00 crc kubenswrapper[4820]: W0201 14:37:00.513946 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ece3bc3_7bab_4cd0_8875_1757a5f0b12b.slice/crio-8f76f46ca190f310f7747eb9efa74cc4f3bb1bdf8dcb2247903941126ea084de WatchSource:0}: Error finding container 8f76f46ca190f310f7747eb9efa74cc4f3bb1bdf8dcb2247903941126ea084de: Status 404 returned error can't find the container with id 8f76f46ca190f310f7747eb9efa74cc4f3bb1bdf8dcb2247903941126ea084de
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.526735 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/de2181c8-3297-4f4a-b21a-51a8d2172a9c-db-sync-config-data\") pod \"glance-db-sync-ww8ln\" (UID: \"de2181c8-3297-4f4a-b21a-51a8d2172a9c\") " pod="openstack/glance-db-sync-ww8ln"
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.526964 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de2181c8-3297-4f4a-b21a-51a8d2172a9c-config-data\") pod \"glance-db-sync-ww8ln\" (UID: \"de2181c8-3297-4f4a-b21a-51a8d2172a9c\") " pod="openstack/glance-db-sync-ww8ln"
pod="openstack/glance-db-sync-ww8ln" Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.526986 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de2181c8-3297-4f4a-b21a-51a8d2172a9c-combined-ca-bundle\") pod \"glance-db-sync-ww8ln\" (UID: \"de2181c8-3297-4f4a-b21a-51a8d2172a9c\") " pod="openstack/glance-db-sync-ww8ln" Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.527099 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bstp6\" (UniqueName: \"kubernetes.io/projected/de2181c8-3297-4f4a-b21a-51a8d2172a9c-kube-api-access-bstp6\") pod \"glance-db-sync-ww8ln\" (UID: \"de2181c8-3297-4f4a-b21a-51a8d2172a9c\") " pod="openstack/glance-db-sync-ww8ln" Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.533070 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de2181c8-3297-4f4a-b21a-51a8d2172a9c-config-data\") pod \"glance-db-sync-ww8ln\" (UID: \"de2181c8-3297-4f4a-b21a-51a8d2172a9c\") " pod="openstack/glance-db-sync-ww8ln" Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.533082 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/de2181c8-3297-4f4a-b21a-51a8d2172a9c-db-sync-config-data\") pod \"glance-db-sync-ww8ln\" (UID: \"de2181c8-3297-4f4a-b21a-51a8d2172a9c\") " pod="openstack/glance-db-sync-ww8ln" Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.533170 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de2181c8-3297-4f4a-b21a-51a8d2172a9c-combined-ca-bundle\") pod \"glance-db-sync-ww8ln\" (UID: \"de2181c8-3297-4f4a-b21a-51a8d2172a9c\") " pod="openstack/glance-db-sync-ww8ln" Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.544402 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bstp6\" (UniqueName: \"kubernetes.io/projected/de2181c8-3297-4f4a-b21a-51a8d2172a9c-kube-api-access-bstp6\") pod \"glance-db-sync-ww8ln\" (UID: \"de2181c8-3297-4f4a-b21a-51a8d2172a9c\") " pod="openstack/glance-db-sync-ww8ln" Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.652268 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d75a-account-create-update-xbb66"] Feb 01 14:37:00 crc kubenswrapper[4820]: W0201 14:37:00.653589 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod377152ee_4878_4651_b7b8_3b3612bae8aa.slice/crio-04a6f06a8c388bc85232ab7019d6eaafa6486de56ab320ae6753332c32ab9bbe WatchSource:0}: Error finding container 04a6f06a8c388bc85232ab7019d6eaafa6486de56ab320ae6753332c32ab9bbe: Status 404 returned error can't find the container with id 04a6f06a8c388bc85232ab7019d6eaafa6486de56ab320ae6753332c32ab9bbe Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.730341 4820 util.go:30] "No sandbox for pod can be found. 
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.969190 4820 generic.go:334] "Generic (PLEG): container finished" podID="9ece3bc3-7bab-4cd0-8875-1757a5f0b12b" containerID="5e5271bdbdf6c39a476d3b11a54e3e70f92388f16c530762b9f6b5a97ca54581" exitCode=0
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.969379 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5nbk7" event={"ID":"9ece3bc3-7bab-4cd0-8875-1757a5f0b12b","Type":"ContainerDied","Data":"5e5271bdbdf6c39a476d3b11a54e3e70f92388f16c530762b9f6b5a97ca54581"}
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.969543 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5nbk7" event={"ID":"9ece3bc3-7bab-4cd0-8875-1757a5f0b12b","Type":"ContainerStarted","Data":"8f76f46ca190f310f7747eb9efa74cc4f3bb1bdf8dcb2247903941126ea084de"}
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.972400 4820 generic.go:334] "Generic (PLEG): container finished" podID="a04eb126-50a3-44e6-9741-345441291daf" containerID="a15949a1f0e4fa4a1f5d41f723c09121a502a96017e0c2bafb58d1a7e976b35f" exitCode=0
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.972488 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6a2e-account-create-update-6zkcd" event={"ID":"a04eb126-50a3-44e6-9741-345441291daf","Type":"ContainerDied","Data":"a15949a1f0e4fa4a1f5d41f723c09121a502a96017e0c2bafb58d1a7e976b35f"}
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.972529 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6a2e-account-create-update-6zkcd" event={"ID":"a04eb126-50a3-44e6-9741-345441291daf","Type":"ContainerStarted","Data":"2daa2dee4c78f565be27c4475b2a41102f3d738b9903ef2c2715c0ba791e4ac1"}
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.973691 4820 generic.go:334] "Generic (PLEG): container finished" podID="0ef0e812-f6c4-4147-9437-163981354ea2" containerID="4323276296c26d16988080cc1d89d9934aacc9a5c31e81c2ef5c77d007b60528" exitCode=0
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.973730 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tlfvt" event={"ID":"0ef0e812-f6c4-4147-9437-163981354ea2","Type":"ContainerDied","Data":"4323276296c26d16988080cc1d89d9934aacc9a5c31e81c2ef5c77d007b60528"}
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.973773 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tlfvt" event={"ID":"0ef0e812-f6c4-4147-9437-163981354ea2","Type":"ContainerStarted","Data":"e6659981c4daeaf7918fc6e6f3b62f1d1e294886cd2ab459ab29d27e3d492b4f"}
Feb 01 14:37:00 crc kubenswrapper[4820]: I0201 14:37:00.975199 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d75a-account-create-update-xbb66" event={"ID":"377152ee-4878-4651-b7b8-3b3612bae8aa","Type":"ContainerStarted","Data":"04a6f06a8c388bc85232ab7019d6eaafa6486de56ab320ae6753332c32ab9bbe"}
Feb 01 14:37:01 crc kubenswrapper[4820]: I0201 14:37:01.272758 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ww8ln"]
Feb 01 14:37:01 crc kubenswrapper[4820]: W0201 14:37:01.281963 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde2181c8_3297_4f4a_b21a_51a8d2172a9c.slice/crio-f8249e629a2659e7292ac2db9fda00bfdc736577414904203c0fae6c5fbb48bc WatchSource:0}: Error finding container f8249e629a2659e7292ac2db9fda00bfdc736577414904203c0fae6c5fbb48bc: Status 404 returned error can't find the container with id f8249e629a2659e7292ac2db9fda00bfdc736577414904203c0fae6c5fbb48bc
Feb 01 14:37:01 crc kubenswrapper[4820]: I0201 14:37:01.987910 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ww8ln" event={"ID":"de2181c8-3297-4f4a-b21a-51a8d2172a9c","Type":"ContainerStarted","Data":"f8249e629a2659e7292ac2db9fda00bfdc736577414904203c0fae6c5fbb48bc"}
Feb 01 14:37:01 crc kubenswrapper[4820]: I0201 14:37:01.990576 4820 generic.go:334] "Generic (PLEG): container finished" podID="377152ee-4878-4651-b7b8-3b3612bae8aa" containerID="da0352d44e4f9a6ddfb45c023114924718ae2ac92a19401fdd8548ed51e8ee7f" exitCode=0
Feb 01 14:37:01 crc kubenswrapper[4820]: I0201 14:37:01.990630 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d75a-account-create-update-xbb66" event={"ID":"377152ee-4878-4651-b7b8-3b3612bae8aa","Type":"ContainerDied","Data":"da0352d44e4f9a6ddfb45c023114924718ae2ac92a19401fdd8548ed51e8ee7f"}
Feb 01 14:37:02 crc kubenswrapper[4820]: I0201 14:37:02.401243 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-tlfvt"
Feb 01 14:37:02 crc kubenswrapper[4820]: I0201 14:37:02.454982 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef0e812-f6c4-4147-9437-163981354ea2-operator-scripts\") pod \"0ef0e812-f6c4-4147-9437-163981354ea2\" (UID: \"0ef0e812-f6c4-4147-9437-163981354ea2\") "
Feb 01 14:37:02 crc kubenswrapper[4820]: I0201 14:37:02.455134 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6dv8\" (UniqueName: \"kubernetes.io/projected/0ef0e812-f6c4-4147-9437-163981354ea2-kube-api-access-x6dv8\") pod \"0ef0e812-f6c4-4147-9437-163981354ea2\" (UID: \"0ef0e812-f6c4-4147-9437-163981354ea2\") "
Feb 01 14:37:02 crc kubenswrapper[4820]: I0201 14:37:02.456536 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ef0e812-f6c4-4147-9437-163981354ea2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ef0e812-f6c4-4147-9437-163981354ea2" (UID: "0ef0e812-f6c4-4147-9437-163981354ea2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:37:02 crc kubenswrapper[4820]: I0201 14:37:02.462044 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ef0e812-f6c4-4147-9437-163981354ea2-kube-api-access-x6dv8" (OuterVolumeSpecName: "kube-api-access-x6dv8") pod "0ef0e812-f6c4-4147-9437-163981354ea2" (UID: "0ef0e812-f6c4-4147-9437-163981354ea2"). InnerVolumeSpecName "kube-api-access-x6dv8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 14:37:02 crc kubenswrapper[4820]: I0201 14:37:02.524951 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5nbk7"
Feb 01 14:37:02 crc kubenswrapper[4820]: I0201 14:37:02.532779 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6a2e-account-create-update-6zkcd"
Need to start a new one" pod="openstack/keystone-6a2e-account-create-update-6zkcd" Feb 01 14:37:02 crc kubenswrapper[4820]: I0201 14:37:02.556731 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ece3bc3-7bab-4cd0-8875-1757a5f0b12b-operator-scripts\") pod \"9ece3bc3-7bab-4cd0-8875-1757a5f0b12b\" (UID: \"9ece3bc3-7bab-4cd0-8875-1757a5f0b12b\") " Feb 01 14:37:02 crc kubenswrapper[4820]: I0201 14:37:02.556807 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a04eb126-50a3-44e6-9741-345441291daf-operator-scripts\") pod \"a04eb126-50a3-44e6-9741-345441291daf\" (UID: \"a04eb126-50a3-44e6-9741-345441291daf\") " Feb 01 14:37:02 crc kubenswrapper[4820]: I0201 14:37:02.556828 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5md7z\" (UniqueName: \"kubernetes.io/projected/9ece3bc3-7bab-4cd0-8875-1757a5f0b12b-kube-api-access-5md7z\") pod \"9ece3bc3-7bab-4cd0-8875-1757a5f0b12b\" (UID: \"9ece3bc3-7bab-4cd0-8875-1757a5f0b12b\") " Feb 01 14:37:02 crc kubenswrapper[4820]: I0201 14:37:02.556853 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snp6g\" (UniqueName: \"kubernetes.io/projected/a04eb126-50a3-44e6-9741-345441291daf-kube-api-access-snp6g\") pod \"a04eb126-50a3-44e6-9741-345441291daf\" (UID: \"a04eb126-50a3-44e6-9741-345441291daf\") " Feb 01 14:37:02 crc kubenswrapper[4820]: I0201 14:37:02.557139 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef0e812-f6c4-4147-9437-163981354ea2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:02 crc kubenswrapper[4820]: I0201 14:37:02.557152 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6dv8\" (UniqueName: \"kubernetes.io/projected/0ef0e812-f6c4-4147-9437-163981354ea2-kube-api-access-x6dv8\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:02 crc kubenswrapper[4820]: I0201 14:37:02.563240 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a04eb126-50a3-44e6-9741-345441291daf-kube-api-access-snp6g" (OuterVolumeSpecName: "kube-api-access-snp6g") pod "a04eb126-50a3-44e6-9741-345441291daf" (UID: "a04eb126-50a3-44e6-9741-345441291daf"). InnerVolumeSpecName "kube-api-access-snp6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:37:02 crc kubenswrapper[4820]: I0201 14:37:02.563670 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ece3bc3-7bab-4cd0-8875-1757a5f0b12b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9ece3bc3-7bab-4cd0-8875-1757a5f0b12b" (UID: "9ece3bc3-7bab-4cd0-8875-1757a5f0b12b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:37:02 crc kubenswrapper[4820]: I0201 14:37:02.566453 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ece3bc3-7bab-4cd0-8875-1757a5f0b12b-kube-api-access-5md7z" (OuterVolumeSpecName: "kube-api-access-5md7z") pod "9ece3bc3-7bab-4cd0-8875-1757a5f0b12b" (UID: "9ece3bc3-7bab-4cd0-8875-1757a5f0b12b"). InnerVolumeSpecName "kube-api-access-5md7z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:37:02 crc kubenswrapper[4820]: I0201 14:37:02.568429 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a04eb126-50a3-44e6-9741-345441291daf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a04eb126-50a3-44e6-9741-345441291daf" (UID: "a04eb126-50a3-44e6-9741-345441291daf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:37:02 crc kubenswrapper[4820]: I0201 14:37:02.659044 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a04eb126-50a3-44e6-9741-345441291daf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:02 crc kubenswrapper[4820]: I0201 14:37:02.659084 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5md7z\" (UniqueName: \"kubernetes.io/projected/9ece3bc3-7bab-4cd0-8875-1757a5f0b12b-kube-api-access-5md7z\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:02 crc kubenswrapper[4820]: I0201 14:37:02.659100 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snp6g\" (UniqueName: \"kubernetes.io/projected/a04eb126-50a3-44e6-9741-345441291daf-kube-api-access-snp6g\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:02 crc kubenswrapper[4820]: I0201 14:37:02.659112 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ece3bc3-7bab-4cd0-8875-1757a5f0b12b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:03 crc kubenswrapper[4820]: I0201 14:37:03.002143 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tlfvt" event={"ID":"0ef0e812-f6c4-4147-9437-163981354ea2","Type":"ContainerDied","Data":"e6659981c4daeaf7918fc6e6f3b62f1d1e294886cd2ab459ab29d27e3d492b4f"} Feb 01 14:37:03 crc kubenswrapper[4820]: I0201 14:37:03.003361 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6659981c4daeaf7918fc6e6f3b62f1d1e294886cd2ab459ab29d27e3d492b4f" Feb 01 14:37:03 crc kubenswrapper[4820]: I0201 14:37:03.002373 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-tlfvt" Feb 01 14:37:03 crc kubenswrapper[4820]: I0201 14:37:03.003883 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5nbk7" event={"ID":"9ece3bc3-7bab-4cd0-8875-1757a5f0b12b","Type":"ContainerDied","Data":"8f76f46ca190f310f7747eb9efa74cc4f3bb1bdf8dcb2247903941126ea084de"} Feb 01 14:37:03 crc kubenswrapper[4820]: I0201 14:37:03.003916 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f76f46ca190f310f7747eb9efa74cc4f3bb1bdf8dcb2247903941126ea084de" Feb 01 14:37:03 crc kubenswrapper[4820]: I0201 14:37:03.003949 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-5nbk7" Feb 01 14:37:03 crc kubenswrapper[4820]: I0201 14:37:03.005914 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6a2e-account-create-update-6zkcd" event={"ID":"a04eb126-50a3-44e6-9741-345441291daf","Type":"ContainerDied","Data":"2daa2dee4c78f565be27c4475b2a41102f3d738b9903ef2c2715c0ba791e4ac1"} Feb 01 14:37:03 crc kubenswrapper[4820]: I0201 14:37:03.005940 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2daa2dee4c78f565be27c4475b2a41102f3d738b9903ef2c2715c0ba791e4ac1" Feb 01 14:37:03 crc kubenswrapper[4820]: I0201 14:37:03.005948 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6a2e-account-create-update-6zkcd" Feb 01 14:37:03 crc kubenswrapper[4820]: I0201 14:37:03.180559 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-m499w"] Feb 01 14:37:03 crc kubenswrapper[4820]: I0201 14:37:03.190719 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-m499w"] Feb 01 14:37:03 crc kubenswrapper[4820]: I0201 14:37:03.207977 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ceffe48-7924-4f28-acbe-52ad3aa1c827" path="/var/lib/kubelet/pods/5ceffe48-7924-4f28-acbe-52ad3aa1c827/volumes" Feb 01 14:37:03 crc kubenswrapper[4820]: I0201 14:37:03.255473 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d75a-account-create-update-xbb66" Feb 01 14:37:03 crc kubenswrapper[4820]: I0201 14:37:03.275398 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/377152ee-4878-4651-b7b8-3b3612bae8aa-operator-scripts\") pod \"377152ee-4878-4651-b7b8-3b3612bae8aa\" (UID: \"377152ee-4878-4651-b7b8-3b3612bae8aa\") " Feb 01 14:37:03 crc kubenswrapper[4820]: I0201 14:37:03.275454 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2m4f\" (UniqueName: \"kubernetes.io/projected/377152ee-4878-4651-b7b8-3b3612bae8aa-kube-api-access-d2m4f\") pod \"377152ee-4878-4651-b7b8-3b3612bae8aa\" (UID: \"377152ee-4878-4651-b7b8-3b3612bae8aa\") " Feb 01 14:37:03 crc kubenswrapper[4820]: I0201 14:37:03.276648 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/377152ee-4878-4651-b7b8-3b3612bae8aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "377152ee-4878-4651-b7b8-3b3612bae8aa" (UID: "377152ee-4878-4651-b7b8-3b3612bae8aa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:37:03 crc kubenswrapper[4820]: I0201 14:37:03.321048 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/377152ee-4878-4651-b7b8-3b3612bae8aa-kube-api-access-d2m4f" (OuterVolumeSpecName: "kube-api-access-d2m4f") pod "377152ee-4878-4651-b7b8-3b3612bae8aa" (UID: "377152ee-4878-4651-b7b8-3b3612bae8aa"). InnerVolumeSpecName "kube-api-access-d2m4f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:37:03 crc kubenswrapper[4820]: I0201 14:37:03.377274 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/377152ee-4878-4651-b7b8-3b3612bae8aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:03 crc kubenswrapper[4820]: I0201 14:37:03.377315 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2m4f\" (UniqueName: \"kubernetes.io/projected/377152ee-4878-4651-b7b8-3b3612bae8aa-kube-api-access-d2m4f\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:04 crc kubenswrapper[4820]: I0201 14:37:04.016335 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d75a-account-create-update-xbb66" event={"ID":"377152ee-4878-4651-b7b8-3b3612bae8aa","Type":"ContainerDied","Data":"04a6f06a8c388bc85232ab7019d6eaafa6486de56ab320ae6753332c32ab9bbe"} Feb 01 14:37:04 crc kubenswrapper[4820]: I0201 14:37:04.016377 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04a6f06a8c388bc85232ab7019d6eaafa6486de56ab320ae6753332c32ab9bbe" Feb 01 14:37:04 crc kubenswrapper[4820]: I0201 14:37:04.016404 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d75a-account-create-update-xbb66" Feb 01 14:37:04 crc kubenswrapper[4820]: I0201 14:37:04.626187 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 01 14:37:08 crc kubenswrapper[4820]: I0201 14:37:08.191211 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-ndtnv"] Feb 01 14:37:08 crc kubenswrapper[4820]: E0201 14:37:08.193647 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04eb126-50a3-44e6-9741-345441291daf" containerName="mariadb-account-create-update" Feb 01 14:37:08 crc kubenswrapper[4820]: I0201 14:37:08.193765 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04eb126-50a3-44e6-9741-345441291daf" containerName="mariadb-account-create-update" Feb 01 14:37:08 crc kubenswrapper[4820]: E0201 14:37:08.193856 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ece3bc3-7bab-4cd0-8875-1757a5f0b12b" containerName="mariadb-database-create" Feb 01 14:37:08 crc kubenswrapper[4820]: I0201 14:37:08.193956 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ece3bc3-7bab-4cd0-8875-1757a5f0b12b" containerName="mariadb-database-create" Feb 01 14:37:08 crc kubenswrapper[4820]: E0201 14:37:08.194039 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef0e812-f6c4-4147-9437-163981354ea2" containerName="mariadb-database-create" Feb 01 14:37:08 crc kubenswrapper[4820]: I0201 14:37:08.194119 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef0e812-f6c4-4147-9437-163981354ea2" containerName="mariadb-database-create" Feb 01 14:37:08 crc kubenswrapper[4820]: E0201 14:37:08.194206 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="377152ee-4878-4651-b7b8-3b3612bae8aa" containerName="mariadb-account-create-update" Feb 01 14:37:08 crc kubenswrapper[4820]: I0201 14:37:08.194314 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="377152ee-4878-4651-b7b8-3b3612bae8aa" containerName="mariadb-account-create-update" Feb 01 14:37:08 crc kubenswrapper[4820]: I0201 14:37:08.194592 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ef0e812-f6c4-4147-9437-163981354ea2" 
containerName="mariadb-database-create" Feb 01 14:37:08 crc kubenswrapper[4820]: I0201 14:37:08.194694 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ece3bc3-7bab-4cd0-8875-1757a5f0b12b" containerName="mariadb-database-create" Feb 01 14:37:08 crc kubenswrapper[4820]: I0201 14:37:08.194784 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="377152ee-4878-4651-b7b8-3b3612bae8aa" containerName="mariadb-account-create-update" Feb 01 14:37:08 crc kubenswrapper[4820]: I0201 14:37:08.194887 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a04eb126-50a3-44e6-9741-345441291daf" containerName="mariadb-account-create-update" Feb 01 14:37:08 crc kubenswrapper[4820]: I0201 14:37:08.195641 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ndtnv" Feb 01 14:37:08 crc kubenswrapper[4820]: I0201 14:37:08.198915 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 01 14:37:08 crc kubenswrapper[4820]: I0201 14:37:08.199975 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ndtnv"] Feb 01 14:37:08 crc kubenswrapper[4820]: I0201 14:37:08.352847 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvkwq\" (UniqueName: \"kubernetes.io/projected/7610cca2-7bcc-4b2d-8c5f-df32dee24ff6-kube-api-access-kvkwq\") pod \"root-account-create-update-ndtnv\" (UID: \"7610cca2-7bcc-4b2d-8c5f-df32dee24ff6\") " pod="openstack/root-account-create-update-ndtnv" Feb 01 14:37:08 crc kubenswrapper[4820]: I0201 14:37:08.352970 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7610cca2-7bcc-4b2d-8c5f-df32dee24ff6-operator-scripts\") pod \"root-account-create-update-ndtnv\" (UID: \"7610cca2-7bcc-4b2d-8c5f-df32dee24ff6\") " pod="openstack/root-account-create-update-ndtnv" Feb 01 14:37:08 crc kubenswrapper[4820]: I0201 14:37:08.454320 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvkwq\" (UniqueName: \"kubernetes.io/projected/7610cca2-7bcc-4b2d-8c5f-df32dee24ff6-kube-api-access-kvkwq\") pod \"root-account-create-update-ndtnv\" (UID: \"7610cca2-7bcc-4b2d-8c5f-df32dee24ff6\") " pod="openstack/root-account-create-update-ndtnv" Feb 01 14:37:08 crc kubenswrapper[4820]: I0201 14:37:08.454394 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7610cca2-7bcc-4b2d-8c5f-df32dee24ff6-operator-scripts\") pod \"root-account-create-update-ndtnv\" (UID: \"7610cca2-7bcc-4b2d-8c5f-df32dee24ff6\") " pod="openstack/root-account-create-update-ndtnv" Feb 01 14:37:08 crc kubenswrapper[4820]: I0201 14:37:08.455168 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7610cca2-7bcc-4b2d-8c5f-df32dee24ff6-operator-scripts\") pod \"root-account-create-update-ndtnv\" (UID: \"7610cca2-7bcc-4b2d-8c5f-df32dee24ff6\") " pod="openstack/root-account-create-update-ndtnv" Feb 01 14:37:08 crc kubenswrapper[4820]: I0201 14:37:08.479664 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvkwq\" (UniqueName: \"kubernetes.io/projected/7610cca2-7bcc-4b2d-8c5f-df32dee24ff6-kube-api-access-kvkwq\") pod 
\"root-account-create-update-ndtnv\" (UID: \"7610cca2-7bcc-4b2d-8c5f-df32dee24ff6\") " pod="openstack/root-account-create-update-ndtnv" Feb 01 14:37:08 crc kubenswrapper[4820]: I0201 14:37:08.514816 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ndtnv" Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.062790 4820 generic.go:334] "Generic (PLEG): container finished" podID="e9cdc675-a849-4f24-bca1-ea5c04c55b52" containerID="8ca585fe94781bc03e18c5f6a239e1473552286942afa45312fecc7896c0516d" exitCode=0 Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.062900 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e9cdc675-a849-4f24-bca1-ea5c04c55b52","Type":"ContainerDied","Data":"8ca585fe94781bc03e18c5f6a239e1473552286942afa45312fecc7896c0516d"} Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.129528 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-wrqhs" podUID="0badf713-2cde-439b-8a0e-c2eedac05b99" containerName="ovn-controller" probeResult="failure" output=< Feb 01 14:37:11 crc kubenswrapper[4820]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 01 14:37:11 crc kubenswrapper[4820]: > Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.140671 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rqms2" Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.142867 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rqms2" Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.352537 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wrqhs-config-f4sl5"] Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.354074 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wrqhs-config-f4sl5" Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.356821 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.363920 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wrqhs-config-f4sl5"] Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.500016 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e4369861-7976-4d4d-a505-7acd515f785d-var-log-ovn\") pod \"ovn-controller-wrqhs-config-f4sl5\" (UID: \"e4369861-7976-4d4d-a505-7acd515f785d\") " pod="openstack/ovn-controller-wrqhs-config-f4sl5" Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.500439 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxx5t\" (UniqueName: \"kubernetes.io/projected/e4369861-7976-4d4d-a505-7acd515f785d-kube-api-access-jxx5t\") pod \"ovn-controller-wrqhs-config-f4sl5\" (UID: \"e4369861-7976-4d4d-a505-7acd515f785d\") " pod="openstack/ovn-controller-wrqhs-config-f4sl5" Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.500490 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e4369861-7976-4d4d-a505-7acd515f785d-additional-scripts\") pod \"ovn-controller-wrqhs-config-f4sl5\" (UID: \"e4369861-7976-4d4d-a505-7acd515f785d\") " pod="openstack/ovn-controller-wrqhs-config-f4sl5" Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.500508 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e4369861-7976-4d4d-a505-7acd515f785d-var-run-ovn\") pod \"ovn-controller-wrqhs-config-f4sl5\" (UID: \"e4369861-7976-4d4d-a505-7acd515f785d\") " pod="openstack/ovn-controller-wrqhs-config-f4sl5" Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.500579 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e4369861-7976-4d4d-a505-7acd515f785d-var-run\") pod \"ovn-controller-wrqhs-config-f4sl5\" (UID: \"e4369861-7976-4d4d-a505-7acd515f785d\") " pod="openstack/ovn-controller-wrqhs-config-f4sl5" Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.500683 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4369861-7976-4d4d-a505-7acd515f785d-scripts\") pod \"ovn-controller-wrqhs-config-f4sl5\" (UID: \"e4369861-7976-4d4d-a505-7acd515f785d\") " pod="openstack/ovn-controller-wrqhs-config-f4sl5" Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.602477 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e4369861-7976-4d4d-a505-7acd515f785d-var-run\") pod \"ovn-controller-wrqhs-config-f4sl5\" (UID: \"e4369861-7976-4d4d-a505-7acd515f785d\") " pod="openstack/ovn-controller-wrqhs-config-f4sl5" Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.602525 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4369861-7976-4d4d-a505-7acd515f785d-scripts\") pod 
\"ovn-controller-wrqhs-config-f4sl5\" (UID: \"e4369861-7976-4d4d-a505-7acd515f785d\") " pod="openstack/ovn-controller-wrqhs-config-f4sl5" Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.602598 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e4369861-7976-4d4d-a505-7acd515f785d-var-log-ovn\") pod \"ovn-controller-wrqhs-config-f4sl5\" (UID: \"e4369861-7976-4d4d-a505-7acd515f785d\") " pod="openstack/ovn-controller-wrqhs-config-f4sl5" Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.602621 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxx5t\" (UniqueName: \"kubernetes.io/projected/e4369861-7976-4d4d-a505-7acd515f785d-kube-api-access-jxx5t\") pod \"ovn-controller-wrqhs-config-f4sl5\" (UID: \"e4369861-7976-4d4d-a505-7acd515f785d\") " pod="openstack/ovn-controller-wrqhs-config-f4sl5" Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.602654 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e4369861-7976-4d4d-a505-7acd515f785d-additional-scripts\") pod \"ovn-controller-wrqhs-config-f4sl5\" (UID: \"e4369861-7976-4d4d-a505-7acd515f785d\") " pod="openstack/ovn-controller-wrqhs-config-f4sl5" Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.602670 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e4369861-7976-4d4d-a505-7acd515f785d-var-run-ovn\") pod \"ovn-controller-wrqhs-config-f4sl5\" (UID: \"e4369861-7976-4d4d-a505-7acd515f785d\") " pod="openstack/ovn-controller-wrqhs-config-f4sl5" Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.602942 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e4369861-7976-4d4d-a505-7acd515f785d-var-run-ovn\") pod \"ovn-controller-wrqhs-config-f4sl5\" (UID: \"e4369861-7976-4d4d-a505-7acd515f785d\") " pod="openstack/ovn-controller-wrqhs-config-f4sl5" Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.602961 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e4369861-7976-4d4d-a505-7acd515f785d-var-run\") pod \"ovn-controller-wrqhs-config-f4sl5\" (UID: \"e4369861-7976-4d4d-a505-7acd515f785d\") " pod="openstack/ovn-controller-wrqhs-config-f4sl5" Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.602989 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e4369861-7976-4d4d-a505-7acd515f785d-var-log-ovn\") pod \"ovn-controller-wrqhs-config-f4sl5\" (UID: \"e4369861-7976-4d4d-a505-7acd515f785d\") " pod="openstack/ovn-controller-wrqhs-config-f4sl5" Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.603714 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e4369861-7976-4d4d-a505-7acd515f785d-additional-scripts\") pod \"ovn-controller-wrqhs-config-f4sl5\" (UID: \"e4369861-7976-4d4d-a505-7acd515f785d\") " pod="openstack/ovn-controller-wrqhs-config-f4sl5" Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.604638 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4369861-7976-4d4d-a505-7acd515f785d-scripts\") pod 
\"ovn-controller-wrqhs-config-f4sl5\" (UID: \"e4369861-7976-4d4d-a505-7acd515f785d\") " pod="openstack/ovn-controller-wrqhs-config-f4sl5" Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.620750 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxx5t\" (UniqueName: \"kubernetes.io/projected/e4369861-7976-4d4d-a505-7acd515f785d-kube-api-access-jxx5t\") pod \"ovn-controller-wrqhs-config-f4sl5\" (UID: \"e4369861-7976-4d4d-a505-7acd515f785d\") " pod="openstack/ovn-controller-wrqhs-config-f4sl5" Feb 01 14:37:11 crc kubenswrapper[4820]: I0201 14:37:11.677964 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wrqhs-config-f4sl5" Feb 01 14:37:12 crc kubenswrapper[4820]: I0201 14:37:12.072055 4820 generic.go:334] "Generic (PLEG): container finished" podID="4c49d65b-e444-406e-8b45-e95ba6bbb52b" containerID="27369d8748d90c3ec8b78c97a7e82fceb121c2da173c2abff85a7bc25bf5ce13" exitCode=0 Feb 01 14:37:12 crc kubenswrapper[4820]: I0201 14:37:12.072132 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c49d65b-e444-406e-8b45-e95ba6bbb52b","Type":"ContainerDied","Data":"27369d8748d90c3ec8b78c97a7e82fceb121c2da173c2abff85a7bc25bf5ce13"} Feb 01 14:37:14 crc kubenswrapper[4820]: I0201 14:37:14.277676 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wrqhs-config-f4sl5"] Feb 01 14:37:14 crc kubenswrapper[4820]: I0201 14:37:14.321907 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ndtnv"] Feb 01 14:37:15 crc kubenswrapper[4820]: I0201 14:37:15.093770 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ww8ln" event={"ID":"de2181c8-3297-4f4a-b21a-51a8d2172a9c","Type":"ContainerStarted","Data":"d1b7e818fbb848716f060bc006c43e68390e8ed317ef57472f2a61ff7cdda78d"} Feb 01 14:37:15 crc kubenswrapper[4820]: I0201 14:37:15.097664 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c49d65b-e444-406e-8b45-e95ba6bbb52b","Type":"ContainerStarted","Data":"75f834141cda94d4a671e93fd2d8a1fb913883ce1307ecbcad100d5c62006d12"} Feb 01 14:37:15 crc kubenswrapper[4820]: I0201 14:37:15.098101 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:37:15 crc kubenswrapper[4820]: I0201 14:37:15.104982 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e9cdc675-a849-4f24-bca1-ea5c04c55b52","Type":"ContainerStarted","Data":"81b2c107480a1a650bf6fa3cbeb4f2affc419c4c25405a0b0581401cb113a6c2"} Feb 01 14:37:15 crc kubenswrapper[4820]: I0201 14:37:15.105203 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 01 14:37:15 crc kubenswrapper[4820]: I0201 14:37:15.108003 4820 generic.go:334] "Generic (PLEG): container finished" podID="e4369861-7976-4d4d-a505-7acd515f785d" containerID="3c26365a04db790a3339f11b707fd2e73ab790b3e5c3fbf036e66a3cdd763675" exitCode=0 Feb 01 14:37:15 crc kubenswrapper[4820]: I0201 14:37:15.108128 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wrqhs-config-f4sl5" event={"ID":"e4369861-7976-4d4d-a505-7acd515f785d","Type":"ContainerDied","Data":"3c26365a04db790a3339f11b707fd2e73ab790b3e5c3fbf036e66a3cdd763675"} Feb 01 14:37:15 crc kubenswrapper[4820]: I0201 14:37:15.108159 4820 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wrqhs-config-f4sl5" event={"ID":"e4369861-7976-4d4d-a505-7acd515f785d","Type":"ContainerStarted","Data":"b1739f39b1a1384e46bc8a2f920cf8f7a64b6f22dbfd2e20a0d030df8ef84322"} Feb 01 14:37:15 crc kubenswrapper[4820]: I0201 14:37:15.111429 4820 generic.go:334] "Generic (PLEG): container finished" podID="7610cca2-7bcc-4b2d-8c5f-df32dee24ff6" containerID="9ede13446cca7394b3e83552878c4917baf91a9f555f02ab0ee279d42e0be258" exitCode=0 Feb 01 14:37:15 crc kubenswrapper[4820]: I0201 14:37:15.111471 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ndtnv" event={"ID":"7610cca2-7bcc-4b2d-8c5f-df32dee24ff6","Type":"ContainerDied","Data":"9ede13446cca7394b3e83552878c4917baf91a9f555f02ab0ee279d42e0be258"} Feb 01 14:37:15 crc kubenswrapper[4820]: I0201 14:37:15.111844 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ndtnv" event={"ID":"7610cca2-7bcc-4b2d-8c5f-df32dee24ff6","Type":"ContainerStarted","Data":"c207100853405635a9847921d6181587d5debe114eb08f8cd999439d778ba69e"} Feb 01 14:37:15 crc kubenswrapper[4820]: I0201 14:37:15.121257 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-ww8ln" podStartSLOduration=2.491598277 podStartE2EDuration="15.121237866s" podCreationTimestamp="2026-02-01 14:37:00 +0000 UTC" firstStartedPulling="2026-02-01 14:37:01.289203149 +0000 UTC m=+962.809569433" lastFinishedPulling="2026-02-01 14:37:13.918842738 +0000 UTC m=+975.439209022" observedRunningTime="2026-02-01 14:37:15.119527974 +0000 UTC m=+976.639894258" watchObservedRunningTime="2026-02-01 14:37:15.121237866 +0000 UTC m=+976.641604150" Feb 01 14:37:15 crc kubenswrapper[4820]: I0201 14:37:15.156270 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=52.049300884 podStartE2EDuration="1m0.156251079s" podCreationTimestamp="2026-02-01 14:36:15 +0000 UTC" firstStartedPulling="2026-02-01 14:36:28.87918288 +0000 UTC m=+930.399549164" lastFinishedPulling="2026-02-01 14:36:36.986133085 +0000 UTC m=+938.506499359" observedRunningTime="2026-02-01 14:37:15.14891628 +0000 UTC m=+976.669282584" watchObservedRunningTime="2026-02-01 14:37:15.156251079 +0000 UTC m=+976.676617363" Feb 01 14:37:15 crc kubenswrapper[4820]: I0201 14:37:15.195992 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=52.236906916 podStartE2EDuration="1m0.195974856s" podCreationTimestamp="2026-02-01 14:36:15 +0000 UTC" firstStartedPulling="2026-02-01 14:36:29.351049273 +0000 UTC m=+930.871415557" lastFinishedPulling="2026-02-01 14:36:37.310117213 +0000 UTC m=+938.830483497" observedRunningTime="2026-02-01 14:37:15.187684764 +0000 UTC m=+976.708051048" watchObservedRunningTime="2026-02-01 14:37:15.195974856 +0000 UTC m=+976.716341140" Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.141182 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-wrqhs" Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.589542 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wrqhs-config-f4sl5" Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.593753 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ndtnv" Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.686969 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e4369861-7976-4d4d-a505-7acd515f785d-var-run-ovn\") pod \"e4369861-7976-4d4d-a505-7acd515f785d\" (UID: \"e4369861-7976-4d4d-a505-7acd515f785d\") " Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.687031 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4369861-7976-4d4d-a505-7acd515f785d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e4369861-7976-4d4d-a505-7acd515f785d" (UID: "e4369861-7976-4d4d-a505-7acd515f785d"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.687078 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e4369861-7976-4d4d-a505-7acd515f785d-var-log-ovn\") pod \"e4369861-7976-4d4d-a505-7acd515f785d\" (UID: \"e4369861-7976-4d4d-a505-7acd515f785d\") " Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.687106 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7610cca2-7bcc-4b2d-8c5f-df32dee24ff6-operator-scripts\") pod \"7610cca2-7bcc-4b2d-8c5f-df32dee24ff6\" (UID: \"7610cca2-7bcc-4b2d-8c5f-df32dee24ff6\") " Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.687147 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e4369861-7976-4d4d-a505-7acd515f785d-var-run\") pod \"e4369861-7976-4d4d-a505-7acd515f785d\" (UID: \"e4369861-7976-4d4d-a505-7acd515f785d\") " Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.687158 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4369861-7976-4d4d-a505-7acd515f785d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e4369861-7976-4d4d-a505-7acd515f785d" (UID: "e4369861-7976-4d4d-a505-7acd515f785d"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.687166 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvkwq\" (UniqueName: \"kubernetes.io/projected/7610cca2-7bcc-4b2d-8c5f-df32dee24ff6-kube-api-access-kvkwq\") pod \"7610cca2-7bcc-4b2d-8c5f-df32dee24ff6\" (UID: \"7610cca2-7bcc-4b2d-8c5f-df32dee24ff6\") " Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.687180 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4369861-7976-4d4d-a505-7acd515f785d-var-run" (OuterVolumeSpecName: "var-run") pod "e4369861-7976-4d4d-a505-7acd515f785d" (UID: "e4369861-7976-4d4d-a505-7acd515f785d"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.687188 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e4369861-7976-4d4d-a505-7acd515f785d-additional-scripts\") pod \"e4369861-7976-4d4d-a505-7acd515f785d\" (UID: \"e4369861-7976-4d4d-a505-7acd515f785d\") " Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.687208 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxx5t\" (UniqueName: \"kubernetes.io/projected/e4369861-7976-4d4d-a505-7acd515f785d-kube-api-access-jxx5t\") pod \"e4369861-7976-4d4d-a505-7acd515f785d\" (UID: \"e4369861-7976-4d4d-a505-7acd515f785d\") " Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.687297 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4369861-7976-4d4d-a505-7acd515f785d-scripts\") pod \"e4369861-7976-4d4d-a505-7acd515f785d\" (UID: \"e4369861-7976-4d4d-a505-7acd515f785d\") " Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.687646 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7610cca2-7bcc-4b2d-8c5f-df32dee24ff6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7610cca2-7bcc-4b2d-8c5f-df32dee24ff6" (UID: "7610cca2-7bcc-4b2d-8c5f-df32dee24ff6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.688212 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4369861-7976-4d4d-a505-7acd515f785d-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "e4369861-7976-4d4d-a505-7acd515f785d" (UID: "e4369861-7976-4d4d-a505-7acd515f785d"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.688424 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4369861-7976-4d4d-a505-7acd515f785d-scripts" (OuterVolumeSpecName: "scripts") pod "e4369861-7976-4d4d-a505-7acd515f785d" (UID: "e4369861-7976-4d4d-a505-7acd515f785d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.688661 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4369861-7976-4d4d-a505-7acd515f785d-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.688678 4820 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e4369861-7976-4d4d-a505-7acd515f785d-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.688687 4820 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e4369861-7976-4d4d-a505-7acd515f785d-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.688696 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7610cca2-7bcc-4b2d-8c5f-df32dee24ff6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.688705 4820 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e4369861-7976-4d4d-a505-7acd515f785d-var-run\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.688713 4820 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e4369861-7976-4d4d-a505-7acd515f785d-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.692537 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7610cca2-7bcc-4b2d-8c5f-df32dee24ff6-kube-api-access-kvkwq" (OuterVolumeSpecName: "kube-api-access-kvkwq") pod "7610cca2-7bcc-4b2d-8c5f-df32dee24ff6" (UID: "7610cca2-7bcc-4b2d-8c5f-df32dee24ff6"). InnerVolumeSpecName "kube-api-access-kvkwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.692580 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4369861-7976-4d4d-a505-7acd515f785d-kube-api-access-jxx5t" (OuterVolumeSpecName: "kube-api-access-jxx5t") pod "e4369861-7976-4d4d-a505-7acd515f785d" (UID: "e4369861-7976-4d4d-a505-7acd515f785d"). InnerVolumeSpecName "kube-api-access-jxx5t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.789850 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvkwq\" (UniqueName: \"kubernetes.io/projected/7610cca2-7bcc-4b2d-8c5f-df32dee24ff6-kube-api-access-kvkwq\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:16 crc kubenswrapper[4820]: I0201 14:37:16.789902 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxx5t\" (UniqueName: \"kubernetes.io/projected/e4369861-7976-4d4d-a505-7acd515f785d-kube-api-access-jxx5t\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:17 crc kubenswrapper[4820]: I0201 14:37:17.134330 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wrqhs-config-f4sl5" event={"ID":"e4369861-7976-4d4d-a505-7acd515f785d","Type":"ContainerDied","Data":"b1739f39b1a1384e46bc8a2f920cf8f7a64b6f22dbfd2e20a0d030df8ef84322"} Feb 01 14:37:17 crc kubenswrapper[4820]: I0201 14:37:17.134382 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wrqhs-config-f4sl5" Feb 01 14:37:17 crc kubenswrapper[4820]: I0201 14:37:17.134391 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1739f39b1a1384e46bc8a2f920cf8f7a64b6f22dbfd2e20a0d030df8ef84322" Feb 01 14:37:17 crc kubenswrapper[4820]: I0201 14:37:17.136451 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ndtnv" event={"ID":"7610cca2-7bcc-4b2d-8c5f-df32dee24ff6","Type":"ContainerDied","Data":"c207100853405635a9847921d6181587d5debe114eb08f8cd999439d778ba69e"} Feb 01 14:37:17 crc kubenswrapper[4820]: I0201 14:37:17.136496 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c207100853405635a9847921d6181587d5debe114eb08f8cd999439d778ba69e" Feb 01 14:37:17 crc kubenswrapper[4820]: I0201 14:37:17.136563 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ndtnv" Feb 01 14:37:17 crc kubenswrapper[4820]: I0201 14:37:17.708564 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-wrqhs-config-f4sl5"] Feb 01 14:37:17 crc kubenswrapper[4820]: I0201 14:37:17.714110 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-wrqhs-config-f4sl5"] Feb 01 14:37:17 crc kubenswrapper[4820]: I0201 14:37:17.799909 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wrqhs-config-hvnbm"] Feb 01 14:37:17 crc kubenswrapper[4820]: E0201 14:37:17.800248 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4369861-7976-4d4d-a505-7acd515f785d" containerName="ovn-config" Feb 01 14:37:17 crc kubenswrapper[4820]: I0201 14:37:17.800274 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4369861-7976-4d4d-a505-7acd515f785d" containerName="ovn-config" Feb 01 14:37:17 crc kubenswrapper[4820]: E0201 14:37:17.800284 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7610cca2-7bcc-4b2d-8c5f-df32dee24ff6" containerName="mariadb-account-create-update" Feb 01 14:37:17 crc kubenswrapper[4820]: I0201 14:37:17.800290 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7610cca2-7bcc-4b2d-8c5f-df32dee24ff6" containerName="mariadb-account-create-update" Feb 01 14:37:17 crc kubenswrapper[4820]: I0201 14:37:17.800435 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7610cca2-7bcc-4b2d-8c5f-df32dee24ff6" containerName="mariadb-account-create-update" Feb 01 14:37:17 crc kubenswrapper[4820]: I0201 14:37:17.800459 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4369861-7976-4d4d-a505-7acd515f785d" containerName="ovn-config" Feb 01 14:37:17 crc kubenswrapper[4820]: I0201 14:37:17.800949 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wrqhs-config-hvnbm" Feb 01 14:37:17 crc kubenswrapper[4820]: I0201 14:37:17.802716 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 01 14:37:17 crc kubenswrapper[4820]: I0201 14:37:17.816944 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wrqhs-config-hvnbm"] Feb 01 14:37:17 crc kubenswrapper[4820]: I0201 14:37:17.907118 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1cc85b0-f627-4e0f-b13f-10549d55ec08-scripts\") pod \"ovn-controller-wrqhs-config-hvnbm\" (UID: \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\") " pod="openstack/ovn-controller-wrqhs-config-hvnbm" Feb 01 14:37:17 crc kubenswrapper[4820]: I0201 14:37:17.907191 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a1cc85b0-f627-4e0f-b13f-10549d55ec08-var-run\") pod \"ovn-controller-wrqhs-config-hvnbm\" (UID: \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\") " pod="openstack/ovn-controller-wrqhs-config-hvnbm" Feb 01 14:37:17 crc kubenswrapper[4820]: I0201 14:37:17.907296 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a1cc85b0-f627-4e0f-b13f-10549d55ec08-additional-scripts\") pod \"ovn-controller-wrqhs-config-hvnbm\" (UID: \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\") " pod="openstack/ovn-controller-wrqhs-config-hvnbm" Feb 01 14:37:17 crc kubenswrapper[4820]: I0201 14:37:17.907339 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c84wn\" (UniqueName: \"kubernetes.io/projected/a1cc85b0-f627-4e0f-b13f-10549d55ec08-kube-api-access-c84wn\") pod \"ovn-controller-wrqhs-config-hvnbm\" (UID: \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\") " pod="openstack/ovn-controller-wrqhs-config-hvnbm" Feb 01 14:37:17 crc kubenswrapper[4820]: I0201 14:37:17.907457 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a1cc85b0-f627-4e0f-b13f-10549d55ec08-var-log-ovn\") pod \"ovn-controller-wrqhs-config-hvnbm\" (UID: \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\") " pod="openstack/ovn-controller-wrqhs-config-hvnbm" Feb 01 14:37:17 crc kubenswrapper[4820]: I0201 14:37:17.907484 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a1cc85b0-f627-4e0f-b13f-10549d55ec08-var-run-ovn\") pod \"ovn-controller-wrqhs-config-hvnbm\" (UID: \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\") " pod="openstack/ovn-controller-wrqhs-config-hvnbm" Feb 01 14:37:18 crc kubenswrapper[4820]: I0201 14:37:18.009675 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a1cc85b0-f627-4e0f-b13f-10549d55ec08-var-log-ovn\") pod \"ovn-controller-wrqhs-config-hvnbm\" (UID: \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\") " pod="openstack/ovn-controller-wrqhs-config-hvnbm" Feb 01 14:37:18 crc kubenswrapper[4820]: I0201 14:37:18.009726 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a1cc85b0-f627-4e0f-b13f-10549d55ec08-var-run-ovn\") 
pod \"ovn-controller-wrqhs-config-hvnbm\" (UID: \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\") " pod="openstack/ovn-controller-wrqhs-config-hvnbm" Feb 01 14:37:18 crc kubenswrapper[4820]: I0201 14:37:18.009796 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1cc85b0-f627-4e0f-b13f-10549d55ec08-scripts\") pod \"ovn-controller-wrqhs-config-hvnbm\" (UID: \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\") " pod="openstack/ovn-controller-wrqhs-config-hvnbm" Feb 01 14:37:18 crc kubenswrapper[4820]: I0201 14:37:18.009839 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a1cc85b0-f627-4e0f-b13f-10549d55ec08-var-run\") pod \"ovn-controller-wrqhs-config-hvnbm\" (UID: \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\") " pod="openstack/ovn-controller-wrqhs-config-hvnbm" Feb 01 14:37:18 crc kubenswrapper[4820]: I0201 14:37:18.009866 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a1cc85b0-f627-4e0f-b13f-10549d55ec08-additional-scripts\") pod \"ovn-controller-wrqhs-config-hvnbm\" (UID: \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\") " pod="openstack/ovn-controller-wrqhs-config-hvnbm" Feb 01 14:37:18 crc kubenswrapper[4820]: I0201 14:37:18.009939 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c84wn\" (UniqueName: \"kubernetes.io/projected/a1cc85b0-f627-4e0f-b13f-10549d55ec08-kube-api-access-c84wn\") pod \"ovn-controller-wrqhs-config-hvnbm\" (UID: \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\") " pod="openstack/ovn-controller-wrqhs-config-hvnbm" Feb 01 14:37:18 crc kubenswrapper[4820]: I0201 14:37:18.010597 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a1cc85b0-f627-4e0f-b13f-10549d55ec08-var-run\") pod \"ovn-controller-wrqhs-config-hvnbm\" (UID: \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\") " pod="openstack/ovn-controller-wrqhs-config-hvnbm" Feb 01 14:37:18 crc kubenswrapper[4820]: I0201 14:37:18.010836 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a1cc85b0-f627-4e0f-b13f-10549d55ec08-var-log-ovn\") pod \"ovn-controller-wrqhs-config-hvnbm\" (UID: \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\") " pod="openstack/ovn-controller-wrqhs-config-hvnbm" Feb 01 14:37:18 crc kubenswrapper[4820]: I0201 14:37:18.010987 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a1cc85b0-f627-4e0f-b13f-10549d55ec08-var-run-ovn\") pod \"ovn-controller-wrqhs-config-hvnbm\" (UID: \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\") " pod="openstack/ovn-controller-wrqhs-config-hvnbm" Feb 01 14:37:18 crc kubenswrapper[4820]: I0201 14:37:18.011142 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a1cc85b0-f627-4e0f-b13f-10549d55ec08-additional-scripts\") pod \"ovn-controller-wrqhs-config-hvnbm\" (UID: \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\") " pod="openstack/ovn-controller-wrqhs-config-hvnbm" Feb 01 14:37:18 crc kubenswrapper[4820]: I0201 14:37:18.012640 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1cc85b0-f627-4e0f-b13f-10549d55ec08-scripts\") pod \"ovn-controller-wrqhs-config-hvnbm\" 
(UID: \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\") " pod="openstack/ovn-controller-wrqhs-config-hvnbm" Feb 01 14:37:18 crc kubenswrapper[4820]: I0201 14:37:18.056556 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c84wn\" (UniqueName: \"kubernetes.io/projected/a1cc85b0-f627-4e0f-b13f-10549d55ec08-kube-api-access-c84wn\") pod \"ovn-controller-wrqhs-config-hvnbm\" (UID: \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\") " pod="openstack/ovn-controller-wrqhs-config-hvnbm" Feb 01 14:37:18 crc kubenswrapper[4820]: I0201 14:37:18.118598 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wrqhs-config-hvnbm" Feb 01 14:37:18 crc kubenswrapper[4820]: I0201 14:37:18.580965 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wrqhs-config-hvnbm"] Feb 01 14:37:19 crc kubenswrapper[4820]: I0201 14:37:19.160811 4820 generic.go:334] "Generic (PLEG): container finished" podID="a1cc85b0-f627-4e0f-b13f-10549d55ec08" containerID="b14cae70f0a2768f84f31bfce59ec59867d513125323ca28f2249db615b54355" exitCode=0 Feb 01 14:37:19 crc kubenswrapper[4820]: I0201 14:37:19.160917 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wrqhs-config-hvnbm" event={"ID":"a1cc85b0-f627-4e0f-b13f-10549d55ec08","Type":"ContainerDied","Data":"b14cae70f0a2768f84f31bfce59ec59867d513125323ca28f2249db615b54355"} Feb 01 14:37:19 crc kubenswrapper[4820]: I0201 14:37:19.161214 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wrqhs-config-hvnbm" event={"ID":"a1cc85b0-f627-4e0f-b13f-10549d55ec08","Type":"ContainerStarted","Data":"22ae85cbb2cae4f71910a9dab206960dfbac2387fb51ba208226670612c7ea17"} Feb 01 14:37:19 crc kubenswrapper[4820]: I0201 14:37:19.206998 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4369861-7976-4d4d-a505-7acd515f785d" path="/var/lib/kubelet/pods/e4369861-7976-4d4d-a505-7acd515f785d/volumes" Feb 01 14:37:19 crc kubenswrapper[4820]: I0201 14:37:19.242241 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:37:19 crc kubenswrapper[4820]: I0201 14:37:19.242310 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:37:19 crc kubenswrapper[4820]: I0201 14:37:19.242365 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 14:37:19 crc kubenswrapper[4820]: I0201 14:37:19.243155 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3b1868d25809e0a94d683b5755d093ee9d0de6decf94341a6eb437233a52b5e1"} pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 14:37:19 crc kubenswrapper[4820]: I0201 14:37:19.243230 4820 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" containerID="cri-o://3b1868d25809e0a94d683b5755d093ee9d0de6decf94341a6eb437233a52b5e1" gracePeriod=600 Feb 01 14:37:20 crc kubenswrapper[4820]: I0201 14:37:20.173250 4820 generic.go:334] "Generic (PLEG): container finished" podID="060a9e0b-803f-4ccc-bed6-92614d449527" containerID="3b1868d25809e0a94d683b5755d093ee9d0de6decf94341a6eb437233a52b5e1" exitCode=0 Feb 01 14:37:20 crc kubenswrapper[4820]: I0201 14:37:20.173330 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerDied","Data":"3b1868d25809e0a94d683b5755d093ee9d0de6decf94341a6eb437233a52b5e1"} Feb 01 14:37:20 crc kubenswrapper[4820]: I0201 14:37:20.173594 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerStarted","Data":"18339407f9a7bd299b3086430fb392c91bda2f44145a4ddfb153b9aef0bd2fe6"} Feb 01 14:37:20 crc kubenswrapper[4820]: I0201 14:37:20.173636 4820 scope.go:117] "RemoveContainer" containerID="f42c80990978c64d79b08815b144c253d004fd2bc9ddecf8ee4b80f9fef30c27" Feb 01 14:37:20 crc kubenswrapper[4820]: I0201 14:37:20.462437 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wrqhs-config-hvnbm" Feb 01 14:37:20 crc kubenswrapper[4820]: I0201 14:37:20.547212 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a1cc85b0-f627-4e0f-b13f-10549d55ec08-additional-scripts\") pod \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\" (UID: \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\") " Feb 01 14:37:20 crc kubenswrapper[4820]: I0201 14:37:20.547272 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a1cc85b0-f627-4e0f-b13f-10549d55ec08-var-log-ovn\") pod \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\" (UID: \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\") " Feb 01 14:37:20 crc kubenswrapper[4820]: I0201 14:37:20.547385 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1cc85b0-f627-4e0f-b13f-10549d55ec08-scripts\") pod \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\" (UID: \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\") " Feb 01 14:37:20 crc kubenswrapper[4820]: I0201 14:37:20.547406 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1cc85b0-f627-4e0f-b13f-10549d55ec08-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "a1cc85b0-f627-4e0f-b13f-10549d55ec08" (UID: "a1cc85b0-f627-4e0f-b13f-10549d55ec08"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:37:20 crc kubenswrapper[4820]: I0201 14:37:20.547454 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c84wn\" (UniqueName: \"kubernetes.io/projected/a1cc85b0-f627-4e0f-b13f-10549d55ec08-kube-api-access-c84wn\") pod \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\" (UID: \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\") " Feb 01 14:37:20 crc kubenswrapper[4820]: I0201 14:37:20.547550 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a1cc85b0-f627-4e0f-b13f-10549d55ec08-var-run\") pod \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\" (UID: \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\") " Feb 01 14:37:20 crc kubenswrapper[4820]: I0201 14:37:20.547586 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a1cc85b0-f627-4e0f-b13f-10549d55ec08-var-run-ovn\") pod \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\" (UID: \"a1cc85b0-f627-4e0f-b13f-10549d55ec08\") " Feb 01 14:37:20 crc kubenswrapper[4820]: I0201 14:37:20.547600 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1cc85b0-f627-4e0f-b13f-10549d55ec08-var-run" (OuterVolumeSpecName: "var-run") pod "a1cc85b0-f627-4e0f-b13f-10549d55ec08" (UID: "a1cc85b0-f627-4e0f-b13f-10549d55ec08"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:37:20 crc kubenswrapper[4820]: I0201 14:37:20.547687 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1cc85b0-f627-4e0f-b13f-10549d55ec08-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "a1cc85b0-f627-4e0f-b13f-10549d55ec08" (UID: "a1cc85b0-f627-4e0f-b13f-10549d55ec08"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:37:20 crc kubenswrapper[4820]: I0201 14:37:20.548011 4820 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a1cc85b0-f627-4e0f-b13f-10549d55ec08-var-run\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:20 crc kubenswrapper[4820]: I0201 14:37:20.548035 4820 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a1cc85b0-f627-4e0f-b13f-10549d55ec08-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:20 crc kubenswrapper[4820]: I0201 14:37:20.548046 4820 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a1cc85b0-f627-4e0f-b13f-10549d55ec08-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:20 crc kubenswrapper[4820]: I0201 14:37:20.548160 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1cc85b0-f627-4e0f-b13f-10549d55ec08-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "a1cc85b0-f627-4e0f-b13f-10549d55ec08" (UID: "a1cc85b0-f627-4e0f-b13f-10549d55ec08"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:37:20 crc kubenswrapper[4820]: I0201 14:37:20.548464 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1cc85b0-f627-4e0f-b13f-10549d55ec08-scripts" (OuterVolumeSpecName: "scripts") pod "a1cc85b0-f627-4e0f-b13f-10549d55ec08" (UID: "a1cc85b0-f627-4e0f-b13f-10549d55ec08"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:37:20 crc kubenswrapper[4820]: I0201 14:37:20.552657 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1cc85b0-f627-4e0f-b13f-10549d55ec08-kube-api-access-c84wn" (OuterVolumeSpecName: "kube-api-access-c84wn") pod "a1cc85b0-f627-4e0f-b13f-10549d55ec08" (UID: "a1cc85b0-f627-4e0f-b13f-10549d55ec08"). InnerVolumeSpecName "kube-api-access-c84wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:37:20 crc kubenswrapper[4820]: I0201 14:37:20.650084 4820 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a1cc85b0-f627-4e0f-b13f-10549d55ec08-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:20 crc kubenswrapper[4820]: I0201 14:37:20.650136 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1cc85b0-f627-4e0f-b13f-10549d55ec08-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:20 crc kubenswrapper[4820]: I0201 14:37:20.650148 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c84wn\" (UniqueName: \"kubernetes.io/projected/a1cc85b0-f627-4e0f-b13f-10549d55ec08-kube-api-access-c84wn\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:21 crc kubenswrapper[4820]: I0201 14:37:21.180716 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wrqhs-config-hvnbm" event={"ID":"a1cc85b0-f627-4e0f-b13f-10549d55ec08","Type":"ContainerDied","Data":"22ae85cbb2cae4f71910a9dab206960dfbac2387fb51ba208226670612c7ea17"} Feb 01 14:37:21 crc kubenswrapper[4820]: I0201 14:37:21.181111 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22ae85cbb2cae4f71910a9dab206960dfbac2387fb51ba208226670612c7ea17" Feb 01 14:37:21 crc kubenswrapper[4820]: I0201 14:37:21.180790 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wrqhs-config-hvnbm" Feb 01 14:37:21 crc kubenswrapper[4820]: I0201 14:37:21.546170 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-wrqhs-config-hvnbm"] Feb 01 14:37:21 crc kubenswrapper[4820]: I0201 14:37:21.554417 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-wrqhs-config-hvnbm"] Feb 01 14:37:23 crc kubenswrapper[4820]: I0201 14:37:23.198683 4820 generic.go:334] "Generic (PLEG): container finished" podID="de2181c8-3297-4f4a-b21a-51a8d2172a9c" containerID="d1b7e818fbb848716f060bc006c43e68390e8ed317ef57472f2a61ff7cdda78d" exitCode=0 Feb 01 14:37:23 crc kubenswrapper[4820]: I0201 14:37:23.208096 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1cc85b0-f627-4e0f-b13f-10549d55ec08" path="/var/lib/kubelet/pods/a1cc85b0-f627-4e0f-b13f-10549d55ec08/volumes" Feb 01 14:37:23 crc kubenswrapper[4820]: I0201 14:37:23.208705 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ww8ln" event={"ID":"de2181c8-3297-4f4a-b21a-51a8d2172a9c","Type":"ContainerDied","Data":"d1b7e818fbb848716f060bc006c43e68390e8ed317ef57472f2a61ff7cdda78d"} Feb 01 14:37:24 crc kubenswrapper[4820]: I0201 14:37:24.694902 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-ww8ln" Feb 01 14:37:24 crc kubenswrapper[4820]: I0201 14:37:24.820688 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de2181c8-3297-4f4a-b21a-51a8d2172a9c-config-data\") pod \"de2181c8-3297-4f4a-b21a-51a8d2172a9c\" (UID: \"de2181c8-3297-4f4a-b21a-51a8d2172a9c\") " Feb 01 14:37:24 crc kubenswrapper[4820]: I0201 14:37:24.820851 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/de2181c8-3297-4f4a-b21a-51a8d2172a9c-db-sync-config-data\") pod \"de2181c8-3297-4f4a-b21a-51a8d2172a9c\" (UID: \"de2181c8-3297-4f4a-b21a-51a8d2172a9c\") " Feb 01 14:37:24 crc kubenswrapper[4820]: I0201 14:37:24.820967 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de2181c8-3297-4f4a-b21a-51a8d2172a9c-combined-ca-bundle\") pod \"de2181c8-3297-4f4a-b21a-51a8d2172a9c\" (UID: \"de2181c8-3297-4f4a-b21a-51a8d2172a9c\") " Feb 01 14:37:24 crc kubenswrapper[4820]: I0201 14:37:24.821037 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bstp6\" (UniqueName: \"kubernetes.io/projected/de2181c8-3297-4f4a-b21a-51a8d2172a9c-kube-api-access-bstp6\") pod \"de2181c8-3297-4f4a-b21a-51a8d2172a9c\" (UID: \"de2181c8-3297-4f4a-b21a-51a8d2172a9c\") " Feb 01 14:37:24 crc kubenswrapper[4820]: I0201 14:37:24.826780 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de2181c8-3297-4f4a-b21a-51a8d2172a9c-kube-api-access-bstp6" (OuterVolumeSpecName: "kube-api-access-bstp6") pod "de2181c8-3297-4f4a-b21a-51a8d2172a9c" (UID: "de2181c8-3297-4f4a-b21a-51a8d2172a9c"). InnerVolumeSpecName "kube-api-access-bstp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:37:24 crc kubenswrapper[4820]: I0201 14:37:24.826777 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de2181c8-3297-4f4a-b21a-51a8d2172a9c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "de2181c8-3297-4f4a-b21a-51a8d2172a9c" (UID: "de2181c8-3297-4f4a-b21a-51a8d2172a9c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:37:24 crc kubenswrapper[4820]: I0201 14:37:24.853896 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de2181c8-3297-4f4a-b21a-51a8d2172a9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de2181c8-3297-4f4a-b21a-51a8d2172a9c" (UID: "de2181c8-3297-4f4a-b21a-51a8d2172a9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:37:24 crc kubenswrapper[4820]: I0201 14:37:24.867217 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de2181c8-3297-4f4a-b21a-51a8d2172a9c-config-data" (OuterVolumeSpecName: "config-data") pod "de2181c8-3297-4f4a-b21a-51a8d2172a9c" (UID: "de2181c8-3297-4f4a-b21a-51a8d2172a9c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:37:24 crc kubenswrapper[4820]: I0201 14:37:24.923462 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de2181c8-3297-4f4a-b21a-51a8d2172a9c-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:24 crc kubenswrapper[4820]: I0201 14:37:24.923507 4820 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/de2181c8-3297-4f4a-b21a-51a8d2172a9c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:24 crc kubenswrapper[4820]: I0201 14:37:24.923524 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de2181c8-3297-4f4a-b21a-51a8d2172a9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:24 crc kubenswrapper[4820]: I0201 14:37:24.923536 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bstp6\" (UniqueName: \"kubernetes.io/projected/de2181c8-3297-4f4a-b21a-51a8d2172a9c-kube-api-access-bstp6\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.217299 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ww8ln" event={"ID":"de2181c8-3297-4f4a-b21a-51a8d2172a9c","Type":"ContainerDied","Data":"f8249e629a2659e7292ac2db9fda00bfdc736577414904203c0fae6c5fbb48bc"} Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.217341 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ww8ln" Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.217343 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8249e629a2659e7292ac2db9fda00bfdc736577414904203c0fae6c5fbb48bc" Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.627281 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-zqlvd"] Feb 01 14:37:25 crc kubenswrapper[4820]: E0201 14:37:25.627636 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de2181c8-3297-4f4a-b21a-51a8d2172a9c" containerName="glance-db-sync" Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.627648 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="de2181c8-3297-4f4a-b21a-51a8d2172a9c" containerName="glance-db-sync" Feb 01 14:37:25 crc kubenswrapper[4820]: E0201 14:37:25.627657 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1cc85b0-f627-4e0f-b13f-10549d55ec08" containerName="ovn-config" Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.627662 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1cc85b0-f627-4e0f-b13f-10549d55ec08" containerName="ovn-config" Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.627806 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="de2181c8-3297-4f4a-b21a-51a8d2172a9c" containerName="glance-db-sync" Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.627824 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1cc85b0-f627-4e0f-b13f-10549d55ec08" containerName="ovn-config" Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.628618 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.638526 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-zqlvd"] Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.734845 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-zqlvd\" (UID: \"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0\") " pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.735292 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-zqlvd\" (UID: \"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0\") " pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.735331 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-config\") pod \"dnsmasq-dns-554567b4f7-zqlvd\" (UID: \"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0\") " pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.735369 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59q7x\" (UniqueName: \"kubernetes.io/projected/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-kube-api-access-59q7x\") pod \"dnsmasq-dns-554567b4f7-zqlvd\" (UID: \"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0\") " pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.735451 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-dns-svc\") pod \"dnsmasq-dns-554567b4f7-zqlvd\" (UID: \"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0\") " pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.837055 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-zqlvd\" (UID: \"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0\") " pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.837104 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-config\") pod \"dnsmasq-dns-554567b4f7-zqlvd\" (UID: \"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0\") " pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.837124 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59q7x\" (UniqueName: \"kubernetes.io/projected/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-kube-api-access-59q7x\") pod \"dnsmasq-dns-554567b4f7-zqlvd\" (UID: \"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0\") " pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.837149 4820 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-dns-svc\") pod \"dnsmasq-dns-554567b4f7-zqlvd\" (UID: \"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0\") " pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.837177 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-zqlvd\" (UID: \"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0\") " pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.838081 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-zqlvd\" (UID: \"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0\") " pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.838122 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-dns-svc\") pod \"dnsmasq-dns-554567b4f7-zqlvd\" (UID: \"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0\") " pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.838151 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-zqlvd\" (UID: \"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0\") " pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.838190 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-config\") pod \"dnsmasq-dns-554567b4f7-zqlvd\" (UID: \"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0\") " pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.854758 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59q7x\" (UniqueName: \"kubernetes.io/projected/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-kube-api-access-59q7x\") pod \"dnsmasq-dns-554567b4f7-zqlvd\" (UID: \"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0\") " pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" Feb 01 14:37:25 crc kubenswrapper[4820]: I0201 14:37:25.957093 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" Feb 01 14:37:26 crc kubenswrapper[4820]: I0201 14:37:26.385748 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-zqlvd"] Feb 01 14:37:26 crc kubenswrapper[4820]: I0201 14:37:26.511486 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 01 14:37:26 crc kubenswrapper[4820]: I0201 14:37:26.774231 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-xl7k9"] Feb 01 14:37:26 crc kubenswrapper[4820]: I0201 14:37:26.776009 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-xl7k9" Feb 01 14:37:26 crc kubenswrapper[4820]: I0201 14:37:26.796076 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xl7k9"] Feb 01 14:37:26 crc kubenswrapper[4820]: I0201 14:37:26.814000 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:37:26 crc kubenswrapper[4820]: I0201 14:37:26.853610 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d45963d5-f579-41a2-81d5-399be2d3ff53-operator-scripts\") pod \"barbican-db-create-xl7k9\" (UID: \"d45963d5-f579-41a2-81d5-399be2d3ff53\") " pod="openstack/barbican-db-create-xl7k9" Feb 01 14:37:26 crc kubenswrapper[4820]: I0201 14:37:26.853783 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzwpz\" (UniqueName: \"kubernetes.io/projected/d45963d5-f579-41a2-81d5-399be2d3ff53-kube-api-access-pzwpz\") pod \"barbican-db-create-xl7k9\" (UID: \"d45963d5-f579-41a2-81d5-399be2d3ff53\") " pod="openstack/barbican-db-create-xl7k9" Feb 01 14:37:26 crc kubenswrapper[4820]: I0201 14:37:26.908311 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-dda1-account-create-update-lfbkq"] Feb 01 14:37:26 crc kubenswrapper[4820]: I0201 14:37:26.909337 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-dda1-account-create-update-lfbkq" Feb 01 14:37:26 crc kubenswrapper[4820]: I0201 14:37:26.911024 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 01 14:37:26 crc kubenswrapper[4820]: I0201 14:37:26.916816 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-dda1-account-create-update-lfbkq"] Feb 01 14:37:26 crc kubenswrapper[4820]: I0201 14:37:26.956342 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5217c856-fc01-4221-a551-63e793f60558-operator-scripts\") pod \"barbican-dda1-account-create-update-lfbkq\" (UID: \"5217c856-fc01-4221-a551-63e793f60558\") " pod="openstack/barbican-dda1-account-create-update-lfbkq" Feb 01 14:37:26 crc kubenswrapper[4820]: I0201 14:37:26.956568 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d45963d5-f579-41a2-81d5-399be2d3ff53-operator-scripts\") pod \"barbican-db-create-xl7k9\" (UID: \"d45963d5-f579-41a2-81d5-399be2d3ff53\") " pod="openstack/barbican-db-create-xl7k9" Feb 01 14:37:26 crc kubenswrapper[4820]: I0201 14:37:26.956611 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2mml\" (UniqueName: \"kubernetes.io/projected/5217c856-fc01-4221-a551-63e793f60558-kube-api-access-q2mml\") pod \"barbican-dda1-account-create-update-lfbkq\" (UID: \"5217c856-fc01-4221-a551-63e793f60558\") " pod="openstack/barbican-dda1-account-create-update-lfbkq" Feb 01 14:37:26 crc kubenswrapper[4820]: I0201 14:37:26.956737 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzwpz\" (UniqueName: \"kubernetes.io/projected/d45963d5-f579-41a2-81d5-399be2d3ff53-kube-api-access-pzwpz\") pod \"barbican-db-create-xl7k9\" (UID: \"d45963d5-f579-41a2-81d5-399be2d3ff53\") 
" pod="openstack/barbican-db-create-xl7k9" Feb 01 14:37:26 crc kubenswrapper[4820]: I0201 14:37:26.957200 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d45963d5-f579-41a2-81d5-399be2d3ff53-operator-scripts\") pod \"barbican-db-create-xl7k9\" (UID: \"d45963d5-f579-41a2-81d5-399be2d3ff53\") " pod="openstack/barbican-db-create-xl7k9" Feb 01 14:37:26 crc kubenswrapper[4820]: I0201 14:37:26.981834 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-bdnlv"] Feb 01 14:37:26 crc kubenswrapper[4820]: I0201 14:37:26.982996 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-bdnlv" Feb 01 14:37:26 crc kubenswrapper[4820]: I0201 14:37:26.993509 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzwpz\" (UniqueName: \"kubernetes.io/projected/d45963d5-f579-41a2-81d5-399be2d3ff53-kube-api-access-pzwpz\") pod \"barbican-db-create-xl7k9\" (UID: \"d45963d5-f579-41a2-81d5-399be2d3ff53\") " pod="openstack/barbican-db-create-xl7k9" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.005723 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-bdnlv"] Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.058244 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2mml\" (UniqueName: \"kubernetes.io/projected/5217c856-fc01-4221-a551-63e793f60558-kube-api-access-q2mml\") pod \"barbican-dda1-account-create-update-lfbkq\" (UID: \"5217c856-fc01-4221-a551-63e793f60558\") " pod="openstack/barbican-dda1-account-create-update-lfbkq" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.058307 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rwbt\" (UniqueName: \"kubernetes.io/projected/905d71d8-55a5-42a9-a22c-bfc3c3eb3e19-kube-api-access-9rwbt\") pod \"cinder-db-create-bdnlv\" (UID: \"905d71d8-55a5-42a9-a22c-bfc3c3eb3e19\") " pod="openstack/cinder-db-create-bdnlv" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.058368 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5217c856-fc01-4221-a551-63e793f60558-operator-scripts\") pod \"barbican-dda1-account-create-update-lfbkq\" (UID: \"5217c856-fc01-4221-a551-63e793f60558\") " pod="openstack/barbican-dda1-account-create-update-lfbkq" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.059085 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5217c856-fc01-4221-a551-63e793f60558-operator-scripts\") pod \"barbican-dda1-account-create-update-lfbkq\" (UID: \"5217c856-fc01-4221-a551-63e793f60558\") " pod="openstack/barbican-dda1-account-create-update-lfbkq" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.059137 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/905d71d8-55a5-42a9-a22c-bfc3c3eb3e19-operator-scripts\") pod \"cinder-db-create-bdnlv\" (UID: \"905d71d8-55a5-42a9-a22c-bfc3c3eb3e19\") " pod="openstack/cinder-db-create-bdnlv" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.091390 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2mml\" (UniqueName: 
\"kubernetes.io/projected/5217c856-fc01-4221-a551-63e793f60558-kube-api-access-q2mml\") pod \"barbican-dda1-account-create-update-lfbkq\" (UID: \"5217c856-fc01-4221-a551-63e793f60558\") " pod="openstack/barbican-dda1-account-create-update-lfbkq" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.095633 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-2f44-account-create-update-fsm6g"] Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.096584 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2f44-account-create-update-fsm6g" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.099016 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.107544 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xl7k9" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.118699 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2f44-account-create-update-fsm6g"] Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.163001 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rwbt\" (UniqueName: \"kubernetes.io/projected/905d71d8-55a5-42a9-a22c-bfc3c3eb3e19-kube-api-access-9rwbt\") pod \"cinder-db-create-bdnlv\" (UID: \"905d71d8-55a5-42a9-a22c-bfc3c3eb3e19\") " pod="openstack/cinder-db-create-bdnlv" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.163097 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vh5p\" (UniqueName: \"kubernetes.io/projected/db77c675-16f7-46c6-bb7a-6aa18e492772-kube-api-access-9vh5p\") pod \"cinder-2f44-account-create-update-fsm6g\" (UID: \"db77c675-16f7-46c6-bb7a-6aa18e492772\") " pod="openstack/cinder-2f44-account-create-update-fsm6g" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.163145 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db77c675-16f7-46c6-bb7a-6aa18e492772-operator-scripts\") pod \"cinder-2f44-account-create-update-fsm6g\" (UID: \"db77c675-16f7-46c6-bb7a-6aa18e492772\") " pod="openstack/cinder-2f44-account-create-update-fsm6g" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.163177 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/905d71d8-55a5-42a9-a22c-bfc3c3eb3e19-operator-scripts\") pod \"cinder-db-create-bdnlv\" (UID: \"905d71d8-55a5-42a9-a22c-bfc3c3eb3e19\") " pod="openstack/cinder-db-create-bdnlv" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.163963 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/905d71d8-55a5-42a9-a22c-bfc3c3eb3e19-operator-scripts\") pod \"cinder-db-create-bdnlv\" (UID: \"905d71d8-55a5-42a9-a22c-bfc3c3eb3e19\") " pod="openstack/cinder-db-create-bdnlv" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.189256 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rwbt\" (UniqueName: \"kubernetes.io/projected/905d71d8-55a5-42a9-a22c-bfc3c3eb3e19-kube-api-access-9rwbt\") pod \"cinder-db-create-bdnlv\" (UID: \"905d71d8-55a5-42a9-a22c-bfc3c3eb3e19\") " pod="openstack/cinder-db-create-bdnlv" Feb 01 14:37:27 crc 
kubenswrapper[4820]: I0201 14:37:27.228884 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-dda1-account-create-update-lfbkq" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.241627 4820 generic.go:334] "Generic (PLEG): container finished" podID="6cf74a95-6bc8-41b1-afb0-662e3a1c71b0" containerID="2ad12fc0e79ea051659042da81c0b014dfba2c0715ad84ce42a9eb0d8612dab2" exitCode=0 Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.241676 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" event={"ID":"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0","Type":"ContainerDied","Data":"2ad12fc0e79ea051659042da81c0b014dfba2c0715ad84ce42a9eb0d8612dab2"} Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.241705 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" event={"ID":"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0","Type":"ContainerStarted","Data":"e1c37c7ba35230d86642f09278104a89363c5ad086926688b2fcfb8a22905e48"} Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.267780 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vh5p\" (UniqueName: \"kubernetes.io/projected/db77c675-16f7-46c6-bb7a-6aa18e492772-kube-api-access-9vh5p\") pod \"cinder-2f44-account-create-update-fsm6g\" (UID: \"db77c675-16f7-46c6-bb7a-6aa18e492772\") " pod="openstack/cinder-2f44-account-create-update-fsm6g" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.267926 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db77c675-16f7-46c6-bb7a-6aa18e492772-operator-scripts\") pod \"cinder-2f44-account-create-update-fsm6g\" (UID: \"db77c675-16f7-46c6-bb7a-6aa18e492772\") " pod="openstack/cinder-2f44-account-create-update-fsm6g" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.270089 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db77c675-16f7-46c6-bb7a-6aa18e492772-operator-scripts\") pod \"cinder-2f44-account-create-update-fsm6g\" (UID: \"db77c675-16f7-46c6-bb7a-6aa18e492772\") " pod="openstack/cinder-2f44-account-create-update-fsm6g" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.304114 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b3a6-account-create-update-t7bv2"] Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.305069 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b3a6-account-create-update-t7bv2" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.307364 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.308453 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vh5p\" (UniqueName: \"kubernetes.io/projected/db77c675-16f7-46c6-bb7a-6aa18e492772-kube-api-access-9vh5p\") pod \"cinder-2f44-account-create-update-fsm6g\" (UID: \"db77c675-16f7-46c6-bb7a-6aa18e492772\") " pod="openstack/cinder-2f44-account-create-update-fsm6g" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.325536 4820 util.go:30] "No sandbox for pod can be found. 
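The "SyncLoop (PLEG): event for pod" lines above embed their payload as plain JSON — event={"ID":...,"Type":"ContainerDied","Data":...} — where ID is the pod UID and Data the container (or sandbox) ID. Flattening those payloads gives a compact container-lifecycle timeline; a sketch under the same stdin assumption:

// pleg_events.go - extract and flatten PLEG event payloads from a kubelet
// journal. The payload shape is taken verbatim from the log above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"regexp"
)

type plegEvent struct {
	ID   string `json:"ID"`   // pod UID
	Type string `json:"Type"` // ContainerStarted, ContainerDied, ...
	Data string `json:"Data"` // container or sandbox ID
}

func main() {
	evRe := regexp.MustCompile(`event=(\{[^}]*\})`)
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		for _, m := range evRe.FindAllStringSubmatch(sc.Text(), -1) {
			var ev plegEvent
			if err := json.Unmarshal([]byte(m[1]), &ev); err != nil {
				continue // not a PLEG payload; skip
			}
			fmt.Printf("%-16s podUID=%s id=%.12s\n", ev.Type, ev.ID, ev.Data)
		}
	}
}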
Need to start a new one" pod="openstack/cinder-db-create-bdnlv" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.334445 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-xzjhl"] Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.335365 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xzjhl" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.361706 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b3a6-account-create-update-t7bv2"] Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.369217 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa83c1db-779c-4e77-8def-c9acf6560e6f-operator-scripts\") pod \"neutron-db-create-xzjhl\" (UID: \"fa83c1db-779c-4e77-8def-c9acf6560e6f\") " pod="openstack/neutron-db-create-xzjhl" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.369282 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b35ffca4-b990-4cf7-ac82-57b7f200ef24-operator-scripts\") pod \"neutron-b3a6-account-create-update-t7bv2\" (UID: \"b35ffca4-b990-4cf7-ac82-57b7f200ef24\") " pod="openstack/neutron-b3a6-account-create-update-t7bv2" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.369386 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mx9w\" (UniqueName: \"kubernetes.io/projected/fa83c1db-779c-4e77-8def-c9acf6560e6f-kube-api-access-4mx9w\") pod \"neutron-db-create-xzjhl\" (UID: \"fa83c1db-779c-4e77-8def-c9acf6560e6f\") " pod="openstack/neutron-db-create-xzjhl" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.369475 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rlxm\" (UniqueName: \"kubernetes.io/projected/b35ffca4-b990-4cf7-ac82-57b7f200ef24-kube-api-access-2rlxm\") pod \"neutron-b3a6-account-create-update-t7bv2\" (UID: \"b35ffca4-b990-4cf7-ac82-57b7f200ef24\") " pod="openstack/neutron-b3a6-account-create-update-t7bv2" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.398832 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-xzjhl"] Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.449972 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-gm5js"] Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.451232 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-gm5js" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.457750 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gm5js"] Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.458298 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.458511 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.458659 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.458896 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9xht9" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.478373 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b35ffca4-b990-4cf7-ac82-57b7f200ef24-operator-scripts\") pod \"neutron-b3a6-account-create-update-t7bv2\" (UID: \"b35ffca4-b990-4cf7-ac82-57b7f200ef24\") " pod="openstack/neutron-b3a6-account-create-update-t7bv2" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.478843 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mx9w\" (UniqueName: \"kubernetes.io/projected/fa83c1db-779c-4e77-8def-c9acf6560e6f-kube-api-access-4mx9w\") pod \"neutron-db-create-xzjhl\" (UID: \"fa83c1db-779c-4e77-8def-c9acf6560e6f\") " pod="openstack/neutron-db-create-xzjhl" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.479018 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b35ffca4-b990-4cf7-ac82-57b7f200ef24-operator-scripts\") pod \"neutron-b3a6-account-create-update-t7bv2\" (UID: \"b35ffca4-b990-4cf7-ac82-57b7f200ef24\") " pod="openstack/neutron-b3a6-account-create-update-t7bv2" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.479042 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rlxm\" (UniqueName: \"kubernetes.io/projected/b35ffca4-b990-4cf7-ac82-57b7f200ef24-kube-api-access-2rlxm\") pod \"neutron-b3a6-account-create-update-t7bv2\" (UID: \"b35ffca4-b990-4cf7-ac82-57b7f200ef24\") " pod="openstack/neutron-b3a6-account-create-update-t7bv2" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.479136 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa83c1db-779c-4e77-8def-c9acf6560e6f-operator-scripts\") pod \"neutron-db-create-xzjhl\" (UID: \"fa83c1db-779c-4e77-8def-c9acf6560e6f\") " pod="openstack/neutron-db-create-xzjhl" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.480616 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa83c1db-779c-4e77-8def-c9acf6560e6f-operator-scripts\") pod \"neutron-db-create-xzjhl\" (UID: \"fa83c1db-779c-4e77-8def-c9acf6560e6f\") " pod="openstack/neutron-db-create-xzjhl" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.502308 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2f44-account-create-update-fsm6g" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.504281 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rlxm\" (UniqueName: \"kubernetes.io/projected/b35ffca4-b990-4cf7-ac82-57b7f200ef24-kube-api-access-2rlxm\") pod \"neutron-b3a6-account-create-update-t7bv2\" (UID: \"b35ffca4-b990-4cf7-ac82-57b7f200ef24\") " pod="openstack/neutron-b3a6-account-create-update-t7bv2" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.504709 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mx9w\" (UniqueName: \"kubernetes.io/projected/fa83c1db-779c-4e77-8def-c9acf6560e6f-kube-api-access-4mx9w\") pod \"neutron-db-create-xzjhl\" (UID: \"fa83c1db-779c-4e77-8def-c9acf6560e6f\") " pod="openstack/neutron-db-create-xzjhl" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.581998 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9mz8\" (UniqueName: \"kubernetes.io/projected/5488a765-644a-47bd-9665-afb4d8bdb6ea-kube-api-access-d9mz8\") pod \"keystone-db-sync-gm5js\" (UID: \"5488a765-644a-47bd-9665-afb4d8bdb6ea\") " pod="openstack/keystone-db-sync-gm5js" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.582282 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5488a765-644a-47bd-9665-afb4d8bdb6ea-combined-ca-bundle\") pod \"keystone-db-sync-gm5js\" (UID: \"5488a765-644a-47bd-9665-afb4d8bdb6ea\") " pod="openstack/keystone-db-sync-gm5js" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.582465 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5488a765-644a-47bd-9665-afb4d8bdb6ea-config-data\") pod \"keystone-db-sync-gm5js\" (UID: \"5488a765-644a-47bd-9665-afb4d8bdb6ea\") " pod="openstack/keystone-db-sync-gm5js" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.627482 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b3a6-account-create-update-t7bv2" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.635119 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xl7k9"] Feb 01 14:37:27 crc kubenswrapper[4820]: W0201 14:37:27.653059 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd45963d5_f579_41a2_81d5_399be2d3ff53.slice/crio-25b999827bd379db554a9c6c8f493d5b68be734d1b63b2601912e7478df23639 WatchSource:0}: Error finding container 25b999827bd379db554a9c6c8f493d5b68be734d1b63b2601912e7478df23639: Status 404 returned error can't find the container with id 25b999827bd379db554a9c6c8f493d5b68be734d1b63b2601912e7478df23639 Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.669673 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-xzjhl" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.685664 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9mz8\" (UniqueName: \"kubernetes.io/projected/5488a765-644a-47bd-9665-afb4d8bdb6ea-kube-api-access-d9mz8\") pod \"keystone-db-sync-gm5js\" (UID: \"5488a765-644a-47bd-9665-afb4d8bdb6ea\") " pod="openstack/keystone-db-sync-gm5js" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.686722 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5488a765-644a-47bd-9665-afb4d8bdb6ea-combined-ca-bundle\") pod \"keystone-db-sync-gm5js\" (UID: \"5488a765-644a-47bd-9665-afb4d8bdb6ea\") " pod="openstack/keystone-db-sync-gm5js" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.686795 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5488a765-644a-47bd-9665-afb4d8bdb6ea-config-data\") pod \"keystone-db-sync-gm5js\" (UID: \"5488a765-644a-47bd-9665-afb4d8bdb6ea\") " pod="openstack/keystone-db-sync-gm5js" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.692909 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5488a765-644a-47bd-9665-afb4d8bdb6ea-combined-ca-bundle\") pod \"keystone-db-sync-gm5js\" (UID: \"5488a765-644a-47bd-9665-afb4d8bdb6ea\") " pod="openstack/keystone-db-sync-gm5js" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.695035 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5488a765-644a-47bd-9665-afb4d8bdb6ea-config-data\") pod \"keystone-db-sync-gm5js\" (UID: \"5488a765-644a-47bd-9665-afb4d8bdb6ea\") " pod="openstack/keystone-db-sync-gm5js" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.707969 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9mz8\" (UniqueName: \"kubernetes.io/projected/5488a765-644a-47bd-9665-afb4d8bdb6ea-kube-api-access-d9mz8\") pod \"keystone-db-sync-gm5js\" (UID: \"5488a765-644a-47bd-9665-afb4d8bdb6ea\") " pod="openstack/keystone-db-sync-gm5js" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.786735 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-dda1-account-create-update-lfbkq"] Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.791669 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-gm5js" Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.855221 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-bdnlv"] Feb 01 14:37:27 crc kubenswrapper[4820]: I0201 14:37:27.979622 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b3a6-account-create-update-t7bv2"] Feb 01 14:37:28 crc kubenswrapper[4820]: I0201 14:37:28.011543 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2f44-account-create-update-fsm6g"] Feb 01 14:37:28 crc kubenswrapper[4820]: I0201 14:37:28.152679 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gm5js"] Feb 01 14:37:28 crc kubenswrapper[4820]: I0201 14:37:28.252220 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-xzjhl"] Feb 01 14:37:28 crc kubenswrapper[4820]: I0201 14:37:28.263485 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b3a6-account-create-update-t7bv2" event={"ID":"b35ffca4-b990-4cf7-ac82-57b7f200ef24","Type":"ContainerStarted","Data":"b0fe217880eb282eac5f57355118e154e568d9a3d2f87fbd45acb1109b99623c"} Feb 01 14:37:28 crc kubenswrapper[4820]: I0201 14:37:28.270558 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-dda1-account-create-update-lfbkq" event={"ID":"5217c856-fc01-4221-a551-63e793f60558","Type":"ContainerStarted","Data":"c1d557d577c9fe18f447431eca536ae30f9de1d5a7f254ed62c982e65c427bb5"} Feb 01 14:37:28 crc kubenswrapper[4820]: I0201 14:37:28.270600 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-dda1-account-create-update-lfbkq" event={"ID":"5217c856-fc01-4221-a551-63e793f60558","Type":"ContainerStarted","Data":"ca3e71a69969c653e98114579b03061e73e68d13724f1ee47dc9db65ee2aa8ce"} Feb 01 14:37:28 crc kubenswrapper[4820]: I0201 14:37:28.281659 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2f44-account-create-update-fsm6g" event={"ID":"db77c675-16f7-46c6-bb7a-6aa18e492772","Type":"ContainerStarted","Data":"a321e01de97af07ba2d0f329e0676c4aa4082b87dc467f49df8255e665390b95"} Feb 01 14:37:28 crc kubenswrapper[4820]: I0201 14:37:28.298080 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-dda1-account-create-update-lfbkq" podStartSLOduration=2.2980610710000002 podStartE2EDuration="2.298061071s" podCreationTimestamp="2026-02-01 14:37:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:37:28.286289774 +0000 UTC m=+989.806656058" watchObservedRunningTime="2026-02-01 14:37:28.298061071 +0000 UTC m=+989.818427355" Feb 01 14:37:28 crc kubenswrapper[4820]: I0201 14:37:28.298440 4820 generic.go:334] "Generic (PLEG): container finished" podID="d45963d5-f579-41a2-81d5-399be2d3ff53" containerID="d38db8cb1c9eec263e78781fb8f1be799fa99d500537c7cfae96cfda8f636561" exitCode=0 Feb 01 14:37:28 crc kubenswrapper[4820]: I0201 14:37:28.298567 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xl7k9" event={"ID":"d45963d5-f579-41a2-81d5-399be2d3ff53","Type":"ContainerDied","Data":"d38db8cb1c9eec263e78781fb8f1be799fa99d500537c7cfae96cfda8f636561"} Feb 01 14:37:28 crc kubenswrapper[4820]: I0201 14:37:28.298600 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xl7k9" 
event={"ID":"d45963d5-f579-41a2-81d5-399be2d3ff53","Type":"ContainerStarted","Data":"25b999827bd379db554a9c6c8f493d5b68be734d1b63b2601912e7478df23639"} Feb 01 14:37:28 crc kubenswrapper[4820]: I0201 14:37:28.304528 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" event={"ID":"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0","Type":"ContainerStarted","Data":"c124a27f1689873448dd4e87ad65b2d239e480812a13b18581e29590db2d5e75"} Feb 01 14:37:28 crc kubenswrapper[4820]: I0201 14:37:28.305341 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" Feb 01 14:37:28 crc kubenswrapper[4820]: I0201 14:37:28.311127 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gm5js" event={"ID":"5488a765-644a-47bd-9665-afb4d8bdb6ea","Type":"ContainerStarted","Data":"e2d84835f6aaaca56010c4e8c1f750b6e710853835b6977fae61f7f138816138"} Feb 01 14:37:28 crc kubenswrapper[4820]: I0201 14:37:28.315611 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bdnlv" event={"ID":"905d71d8-55a5-42a9-a22c-bfc3c3eb3e19","Type":"ContainerStarted","Data":"7f1b9b5d1c27be0d153c62d77dd01e5d75c40b404fd991b695381835eb5f6882"} Feb 01 14:37:28 crc kubenswrapper[4820]: I0201 14:37:28.315653 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bdnlv" event={"ID":"905d71d8-55a5-42a9-a22c-bfc3c3eb3e19","Type":"ContainerStarted","Data":"208caf3b0cd58880abc08104a513d70047dd3dbba51998ada2248ac715a99c40"} Feb 01 14:37:28 crc kubenswrapper[4820]: I0201 14:37:28.349233 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-bdnlv" podStartSLOduration=2.349215116 podStartE2EDuration="2.349215116s" podCreationTimestamp="2026-02-01 14:37:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:37:28.34033261 +0000 UTC m=+989.860698894" watchObservedRunningTime="2026-02-01 14:37:28.349215116 +0000 UTC m=+989.869581400" Feb 01 14:37:28 crc kubenswrapper[4820]: I0201 14:37:28.364854 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" podStartSLOduration=3.364828976 podStartE2EDuration="3.364828976s" podCreationTimestamp="2026-02-01 14:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:37:28.357547499 +0000 UTC m=+989.877913783" watchObservedRunningTime="2026-02-01 14:37:28.364828976 +0000 UTC m=+989.885195260" Feb 01 14:37:29 crc kubenswrapper[4820]: I0201 14:37:29.346309 4820 generic.go:334] "Generic (PLEG): container finished" podID="b35ffca4-b990-4cf7-ac82-57b7f200ef24" containerID="08d3c20305ac64a46fa6fb896129520c756141077db4cb2252bcae5497154021" exitCode=0 Feb 01 14:37:29 crc kubenswrapper[4820]: I0201 14:37:29.346679 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b3a6-account-create-update-t7bv2" event={"ID":"b35ffca4-b990-4cf7-ac82-57b7f200ef24","Type":"ContainerDied","Data":"08d3c20305ac64a46fa6fb896129520c756141077db4cb2252bcae5497154021"} Feb 01 14:37:29 crc kubenswrapper[4820]: I0201 14:37:29.360149 4820 generic.go:334] "Generic (PLEG): container finished" podID="5217c856-fc01-4221-a551-63e793f60558" containerID="c1d557d577c9fe18f447431eca536ae30f9de1d5a7f254ed62c982e65c427bb5" exitCode=0 Feb 01 14:37:29 
crc kubenswrapper[4820]: I0201 14:37:29.360334 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-dda1-account-create-update-lfbkq" event={"ID":"5217c856-fc01-4221-a551-63e793f60558","Type":"ContainerDied","Data":"c1d557d577c9fe18f447431eca536ae30f9de1d5a7f254ed62c982e65c427bb5"} Feb 01 14:37:29 crc kubenswrapper[4820]: I0201 14:37:29.389523 4820 generic.go:334] "Generic (PLEG): container finished" podID="fa83c1db-779c-4e77-8def-c9acf6560e6f" containerID="73662602e50d54ebfc50b493bbc18a901f4c698f9ae8ce3a2c411f4cea2fd5a1" exitCode=0 Feb 01 14:37:29 crc kubenswrapper[4820]: I0201 14:37:29.389932 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xzjhl" event={"ID":"fa83c1db-779c-4e77-8def-c9acf6560e6f","Type":"ContainerDied","Data":"73662602e50d54ebfc50b493bbc18a901f4c698f9ae8ce3a2c411f4cea2fd5a1"} Feb 01 14:37:29 crc kubenswrapper[4820]: I0201 14:37:29.390011 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xzjhl" event={"ID":"fa83c1db-779c-4e77-8def-c9acf6560e6f","Type":"ContainerStarted","Data":"af335e0f70b4d0aa5bb8fe588cc183b464f1c59401fcac3de0db55a129d9c11e"} Feb 01 14:37:29 crc kubenswrapper[4820]: I0201 14:37:29.392468 4820 generic.go:334] "Generic (PLEG): container finished" podID="db77c675-16f7-46c6-bb7a-6aa18e492772" containerID="2ca19a73e4632511bdc1770de5ef889951ca5c50b306b3b5bba53d03d31d72a5" exitCode=0 Feb 01 14:37:29 crc kubenswrapper[4820]: I0201 14:37:29.392635 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2f44-account-create-update-fsm6g" event={"ID":"db77c675-16f7-46c6-bb7a-6aa18e492772","Type":"ContainerDied","Data":"2ca19a73e4632511bdc1770de5ef889951ca5c50b306b3b5bba53d03d31d72a5"} Feb 01 14:37:29 crc kubenswrapper[4820]: I0201 14:37:29.395265 4820 generic.go:334] "Generic (PLEG): container finished" podID="905d71d8-55a5-42a9-a22c-bfc3c3eb3e19" containerID="7f1b9b5d1c27be0d153c62d77dd01e5d75c40b404fd991b695381835eb5f6882" exitCode=0 Feb 01 14:37:29 crc kubenswrapper[4820]: I0201 14:37:29.395384 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bdnlv" event={"ID":"905d71d8-55a5-42a9-a22c-bfc3c3eb3e19","Type":"ContainerDied","Data":"7f1b9b5d1c27be0d153c62d77dd01e5d75c40b404fd991b695381835eb5f6882"} Feb 01 14:37:29 crc kubenswrapper[4820]: I0201 14:37:29.860380 4820 util.go:48] "No ready sandbox for pod can be found. 
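The pod_startup_latency_tracker lines above record the startup SLO metric per pod (podStartSLOduration, in seconds); the m=+989.8 suffix on the timestamps is Go's monotonic-clock annotation, roughly seconds since the kubelet process started. Pulling those durations out sorted gives the slowest starters at a glance; a sketch under the stdin assumption:

// startup_slo.go - rank pods by podStartSLOduration from a kubelet journal.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
	"strconv"
)

func main() {
	re := regexp.MustCompile(`"Observed pod startup duration" pod="([^"]+)" podStartSLOduration=([0-9.]+)`)
	type row struct {
		pod string
		sec float64
	}
	var rows []row
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			if sec, err := strconv.ParseFloat(m[2], 64); err == nil {
				rows = append(rows, row{m[1], sec})
			}
		}
	}
	sort.Slice(rows, func(i, j int) bool { return rows[i].sec > rows[j].sec })
	for _, r := range rows {
		fmt.Printf("%8.3fs  %s\n", r.sec, r.pod)
	}
}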
Need to start a new one" pod="openstack/barbican-db-create-xl7k9" Feb 01 14:37:29 crc kubenswrapper[4820]: I0201 14:37:29.956512 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d45963d5-f579-41a2-81d5-399be2d3ff53-operator-scripts\") pod \"d45963d5-f579-41a2-81d5-399be2d3ff53\" (UID: \"d45963d5-f579-41a2-81d5-399be2d3ff53\") " Feb 01 14:37:29 crc kubenswrapper[4820]: I0201 14:37:29.956974 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzwpz\" (UniqueName: \"kubernetes.io/projected/d45963d5-f579-41a2-81d5-399be2d3ff53-kube-api-access-pzwpz\") pod \"d45963d5-f579-41a2-81d5-399be2d3ff53\" (UID: \"d45963d5-f579-41a2-81d5-399be2d3ff53\") " Feb 01 14:37:29 crc kubenswrapper[4820]: I0201 14:37:29.957949 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45963d5-f579-41a2-81d5-399be2d3ff53-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d45963d5-f579-41a2-81d5-399be2d3ff53" (UID: "d45963d5-f579-41a2-81d5-399be2d3ff53"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:37:29 crc kubenswrapper[4820]: I0201 14:37:29.958161 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d45963d5-f579-41a2-81d5-399be2d3ff53-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:29 crc kubenswrapper[4820]: I0201 14:37:29.978399 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45963d5-f579-41a2-81d5-399be2d3ff53-kube-api-access-pzwpz" (OuterVolumeSpecName: "kube-api-access-pzwpz") pod "d45963d5-f579-41a2-81d5-399be2d3ff53" (UID: "d45963d5-f579-41a2-81d5-399be2d3ff53"). InnerVolumeSpecName "kube-api-access-pzwpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:37:30 crc kubenswrapper[4820]: I0201 14:37:30.060056 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzwpz\" (UniqueName: \"kubernetes.io/projected/d45963d5-f579-41a2-81d5-399be2d3ff53-kube-api-access-pzwpz\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:30 crc kubenswrapper[4820]: I0201 14:37:30.404933 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xl7k9" Feb 01 14:37:30 crc kubenswrapper[4820]: I0201 14:37:30.405067 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xl7k9" event={"ID":"d45963d5-f579-41a2-81d5-399be2d3ff53","Type":"ContainerDied","Data":"25b999827bd379db554a9c6c8f493d5b68be734d1b63b2601912e7478df23639"} Feb 01 14:37:30 crc kubenswrapper[4820]: I0201 14:37:30.406123 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25b999827bd379db554a9c6c8f493d5b68be734d1b63b2601912e7478df23639" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.370930 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-dda1-account-create-update-lfbkq" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.376048 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b3a6-account-create-update-t7bv2" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.382667 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-bdnlv" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.402833 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2f44-account-create-update-fsm6g" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.413429 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xzjhl" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.443005 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2f44-account-create-update-fsm6g" event={"ID":"db77c675-16f7-46c6-bb7a-6aa18e492772","Type":"ContainerDied","Data":"a321e01de97af07ba2d0f329e0676c4aa4082b87dc467f49df8255e665390b95"} Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.443048 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a321e01de97af07ba2d0f329e0676c4aa4082b87dc467f49df8255e665390b95" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.443125 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2f44-account-create-update-fsm6g" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.446480 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bdnlv" event={"ID":"905d71d8-55a5-42a9-a22c-bfc3c3eb3e19","Type":"ContainerDied","Data":"208caf3b0cd58880abc08104a513d70047dd3dbba51998ada2248ac715a99c40"} Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.446506 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="208caf3b0cd58880abc08104a513d70047dd3dbba51998ada2248ac715a99c40" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.446542 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-bdnlv" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.450439 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b3a6-account-create-update-t7bv2" event={"ID":"b35ffca4-b990-4cf7-ac82-57b7f200ef24","Type":"ContainerDied","Data":"b0fe217880eb282eac5f57355118e154e568d9a3d2f87fbd45acb1109b99623c"} Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.450485 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0fe217880eb282eac5f57355118e154e568d9a3d2f87fbd45acb1109b99623c" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.450493 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b3a6-account-create-update-t7bv2" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.451475 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-dda1-account-create-update-lfbkq" event={"ID":"5217c856-fc01-4221-a551-63e793f60558","Type":"ContainerDied","Data":"ca3e71a69969c653e98114579b03061e73e68d13724f1ee47dc9db65ee2aa8ce"} Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.451506 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca3e71a69969c653e98114579b03061e73e68d13724f1ee47dc9db65ee2aa8ce" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.451543 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-dda1-account-create-update-lfbkq" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.453202 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xzjhl" event={"ID":"fa83c1db-779c-4e77-8def-c9acf6560e6f","Type":"ContainerDied","Data":"af335e0f70b4d0aa5bb8fe588cc183b464f1c59401fcac3de0db55a129d9c11e"} Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.453222 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af335e0f70b4d0aa5bb8fe588cc183b464f1c59401fcac3de0db55a129d9c11e" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.455726 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xzjhl" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.515014 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/905d71d8-55a5-42a9-a22c-bfc3c3eb3e19-operator-scripts\") pod \"905d71d8-55a5-42a9-a22c-bfc3c3eb3e19\" (UID: \"905d71d8-55a5-42a9-a22c-bfc3c3eb3e19\") " Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.515096 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db77c675-16f7-46c6-bb7a-6aa18e492772-operator-scripts\") pod \"db77c675-16f7-46c6-bb7a-6aa18e492772\" (UID: \"db77c675-16f7-46c6-bb7a-6aa18e492772\") " Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.515179 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b35ffca4-b990-4cf7-ac82-57b7f200ef24-operator-scripts\") pod \"b35ffca4-b990-4cf7-ac82-57b7f200ef24\" (UID: \"b35ffca4-b990-4cf7-ac82-57b7f200ef24\") " Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.515212 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rlxm\" (UniqueName: \"kubernetes.io/projected/b35ffca4-b990-4cf7-ac82-57b7f200ef24-kube-api-access-2rlxm\") pod \"b35ffca4-b990-4cf7-ac82-57b7f200ef24\" (UID: \"b35ffca4-b990-4cf7-ac82-57b7f200ef24\") " Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.515262 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rwbt\" (UniqueName: \"kubernetes.io/projected/905d71d8-55a5-42a9-a22c-bfc3c3eb3e19-kube-api-access-9rwbt\") pod \"905d71d8-55a5-42a9-a22c-bfc3c3eb3e19\" (UID: \"905d71d8-55a5-42a9-a22c-bfc3c3eb3e19\") " Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.515377 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa83c1db-779c-4e77-8def-c9acf6560e6f-operator-scripts\") pod \"fa83c1db-779c-4e77-8def-c9acf6560e6f\" (UID: \"fa83c1db-779c-4e77-8def-c9acf6560e6f\") " Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.516143 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa83c1db-779c-4e77-8def-c9acf6560e6f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa83c1db-779c-4e77-8def-c9acf6560e6f" (UID: "fa83c1db-779c-4e77-8def-c9acf6560e6f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.515474 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5217c856-fc01-4221-a551-63e793f60558-operator-scripts\") pod \"5217c856-fc01-4221-a551-63e793f60558\" (UID: \"5217c856-fc01-4221-a551-63e793f60558\") " Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.516239 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db77c675-16f7-46c6-bb7a-6aa18e492772-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db77c675-16f7-46c6-bb7a-6aa18e492772" (UID: "db77c675-16f7-46c6-bb7a-6aa18e492772"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.516255 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/905d71d8-55a5-42a9-a22c-bfc3c3eb3e19-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "905d71d8-55a5-42a9-a22c-bfc3c3eb3e19" (UID: "905d71d8-55a5-42a9-a22c-bfc3c3eb3e19"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.516274 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b35ffca4-b990-4cf7-ac82-57b7f200ef24-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b35ffca4-b990-4cf7-ac82-57b7f200ef24" (UID: "b35ffca4-b990-4cf7-ac82-57b7f200ef24"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.516291 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2mml\" (UniqueName: \"kubernetes.io/projected/5217c856-fc01-4221-a551-63e793f60558-kube-api-access-q2mml\") pod \"5217c856-fc01-4221-a551-63e793f60558\" (UID: \"5217c856-fc01-4221-a551-63e793f60558\") " Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.516260 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5217c856-fc01-4221-a551-63e793f60558-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5217c856-fc01-4221-a551-63e793f60558" (UID: "5217c856-fc01-4221-a551-63e793f60558"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.516432 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vh5p\" (UniqueName: \"kubernetes.io/projected/db77c675-16f7-46c6-bb7a-6aa18e492772-kube-api-access-9vh5p\") pod \"db77c675-16f7-46c6-bb7a-6aa18e492772\" (UID: \"db77c675-16f7-46c6-bb7a-6aa18e492772\") " Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.516471 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mx9w\" (UniqueName: \"kubernetes.io/projected/fa83c1db-779c-4e77-8def-c9acf6560e6f-kube-api-access-4mx9w\") pod \"fa83c1db-779c-4e77-8def-c9acf6560e6f\" (UID: \"fa83c1db-779c-4e77-8def-c9acf6560e6f\") " Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.517169 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa83c1db-779c-4e77-8def-c9acf6560e6f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.517196 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5217c856-fc01-4221-a551-63e793f60558-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.517208 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/905d71d8-55a5-42a9-a22c-bfc3c3eb3e19-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.517221 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db77c675-16f7-46c6-bb7a-6aa18e492772-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.517234 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b35ffca4-b990-4cf7-ac82-57b7f200ef24-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.519324 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/905d71d8-55a5-42a9-a22c-bfc3c3eb3e19-kube-api-access-9rwbt" (OuterVolumeSpecName: "kube-api-access-9rwbt") pod "905d71d8-55a5-42a9-a22c-bfc3c3eb3e19" (UID: "905d71d8-55a5-42a9-a22c-bfc3c3eb3e19"). InnerVolumeSpecName "kube-api-access-9rwbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.519857 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b35ffca4-b990-4cf7-ac82-57b7f200ef24-kube-api-access-2rlxm" (OuterVolumeSpecName: "kube-api-access-2rlxm") pod "b35ffca4-b990-4cf7-ac82-57b7f200ef24" (UID: "b35ffca4-b990-4cf7-ac82-57b7f200ef24"). InnerVolumeSpecName "kube-api-access-2rlxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.520174 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa83c1db-779c-4e77-8def-c9acf6560e6f-kube-api-access-4mx9w" (OuterVolumeSpecName: "kube-api-access-4mx9w") pod "fa83c1db-779c-4e77-8def-c9acf6560e6f" (UID: "fa83c1db-779c-4e77-8def-c9acf6560e6f"). InnerVolumeSpecName "kube-api-access-4mx9w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.520303 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db77c675-16f7-46c6-bb7a-6aa18e492772-kube-api-access-9vh5p" (OuterVolumeSpecName: "kube-api-access-9vh5p") pod "db77c675-16f7-46c6-bb7a-6aa18e492772" (UID: "db77c675-16f7-46c6-bb7a-6aa18e492772"). InnerVolumeSpecName "kube-api-access-9vh5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.522804 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5217c856-fc01-4221-a551-63e793f60558-kube-api-access-q2mml" (OuterVolumeSpecName: "kube-api-access-q2mml") pod "5217c856-fc01-4221-a551-63e793f60558" (UID: "5217c856-fc01-4221-a551-63e793f60558"). InnerVolumeSpecName "kube-api-access-q2mml". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.618506 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vh5p\" (UniqueName: \"kubernetes.io/projected/db77c675-16f7-46c6-bb7a-6aa18e492772-kube-api-access-9vh5p\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.618541 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mx9w\" (UniqueName: \"kubernetes.io/projected/fa83c1db-779c-4e77-8def-c9acf6560e6f-kube-api-access-4mx9w\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.618550 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rlxm\" (UniqueName: \"kubernetes.io/projected/b35ffca4-b990-4cf7-ac82-57b7f200ef24-kube-api-access-2rlxm\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.618560 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rwbt\" (UniqueName: \"kubernetes.io/projected/905d71d8-55a5-42a9-a22c-bfc3c3eb3e19-kube-api-access-9rwbt\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:33 crc kubenswrapper[4820]: I0201 14:37:33.618570 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2mml\" (UniqueName: \"kubernetes.io/projected/5217c856-fc01-4221-a551-63e793f60558-kube-api-access-q2mml\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:34 crc kubenswrapper[4820]: I0201 14:37:34.468691 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gm5js" event={"ID":"5488a765-644a-47bd-9665-afb4d8bdb6ea","Type":"ContainerStarted","Data":"b76059fba1e654409a9b4c07ad19076bccf9eee29f71b54cad3dce778b0e59ce"} Feb 01 14:37:35 crc kubenswrapper[4820]: I0201 14:37:35.959271 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" Feb 01 14:37:35 crc kubenswrapper[4820]: I0201 14:37:35.983338 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-gm5js" podStartSLOduration=3.961247129 podStartE2EDuration="8.983319536s" podCreationTimestamp="2026-02-01 14:37:27 +0000 UTC" firstStartedPulling="2026-02-01 14:37:28.219467526 +0000 UTC m=+989.739833810" lastFinishedPulling="2026-02-01 14:37:33.241539933 +0000 UTC m=+994.761906217" observedRunningTime="2026-02-01 14:37:34.495886091 +0000 UTC m=+996.016252365" watchObservedRunningTime="2026-02-01 14:37:35.983319536 +0000 UTC m=+997.503685820" Feb 01 14:37:36 crc kubenswrapper[4820]: I0201 
Feb 01 14:37:36 crc kubenswrapper[4820]: I0201 14:37:36.024735 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-wxbt2"]
Feb 01 14:37:36 crc kubenswrapper[4820]: I0201 14:37:36.025303 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-wxbt2" podUID="66252276-3151-4fc8-accd-e6f036d64ba5" containerName="dnsmasq-dns" containerID="cri-o://b28a153dc011e64e9a490df9f4b31552ab784ef2f8ec11f5138f823947e583f6" gracePeriod=10
Feb 01 14:37:36 crc kubenswrapper[4820]: I0201 14:37:36.486288 4820 generic.go:334] "Generic (PLEG): container finished" podID="66252276-3151-4fc8-accd-e6f036d64ba5" containerID="b28a153dc011e64e9a490df9f4b31552ab784ef2f8ec11f5138f823947e583f6" exitCode=0
Feb 01 14:37:36 crc kubenswrapper[4820]: I0201 14:37:36.486349 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-wxbt2" event={"ID":"66252276-3151-4fc8-accd-e6f036d64ba5","Type":"ContainerDied","Data":"b28a153dc011e64e9a490df9f4b31552ab784ef2f8ec11f5138f823947e583f6"}
Feb 01 14:37:37 crc kubenswrapper[4820]: I0201 14:37:37.021727 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-wxbt2"
Feb 01 14:37:37 crc kubenswrapper[4820]: I0201 14:37:37.176420 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66252276-3151-4fc8-accd-e6f036d64ba5-dns-svc\") pod \"66252276-3151-4fc8-accd-e6f036d64ba5\" (UID: \"66252276-3151-4fc8-accd-e6f036d64ba5\") "
Feb 01 14:37:37 crc kubenswrapper[4820]: I0201 14:37:37.176492 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66252276-3151-4fc8-accd-e6f036d64ba5-ovsdbserver-sb\") pod \"66252276-3151-4fc8-accd-e6f036d64ba5\" (UID: \"66252276-3151-4fc8-accd-e6f036d64ba5\") "
Feb 01 14:37:37 crc kubenswrapper[4820]: I0201 14:37:37.176534 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66252276-3151-4fc8-accd-e6f036d64ba5-ovsdbserver-nb\") pod \"66252276-3151-4fc8-accd-e6f036d64ba5\" (UID: \"66252276-3151-4fc8-accd-e6f036d64ba5\") "
Feb 01 14:37:37 crc kubenswrapper[4820]: I0201 14:37:37.176572 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z26v2\" (UniqueName: \"kubernetes.io/projected/66252276-3151-4fc8-accd-e6f036d64ba5-kube-api-access-z26v2\") pod \"66252276-3151-4fc8-accd-e6f036d64ba5\" (UID: \"66252276-3151-4fc8-accd-e6f036d64ba5\") "
Feb 01 14:37:37 crc kubenswrapper[4820]: I0201 14:37:37.176634 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66252276-3151-4fc8-accd-e6f036d64ba5-config\") pod \"66252276-3151-4fc8-accd-e6f036d64ba5\" (UID: \"66252276-3151-4fc8-accd-e6f036d64ba5\") "
Feb 01 14:37:37 crc kubenswrapper[4820]: I0201 14:37:37.210038 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66252276-3151-4fc8-accd-e6f036d64ba5-kube-api-access-z26v2" (OuterVolumeSpecName: "kube-api-access-z26v2") pod "66252276-3151-4fc8-accd-e6f036d64ba5" (UID: "66252276-3151-4fc8-accd-e6f036d64ba5"). InnerVolumeSpecName "kube-api-access-z26v2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 14:37:37 crc kubenswrapper[4820]: I0201 14:37:37.232501 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66252276-3151-4fc8-accd-e6f036d64ba5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "66252276-3151-4fc8-accd-e6f036d64ba5" (UID: "66252276-3151-4fc8-accd-e6f036d64ba5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:37:37 crc kubenswrapper[4820]: I0201 14:37:37.233166 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66252276-3151-4fc8-accd-e6f036d64ba5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "66252276-3151-4fc8-accd-e6f036d64ba5" (UID: "66252276-3151-4fc8-accd-e6f036d64ba5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:37:37 crc kubenswrapper[4820]: I0201 14:37:37.240986 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66252276-3151-4fc8-accd-e6f036d64ba5-config" (OuterVolumeSpecName: "config") pod "66252276-3151-4fc8-accd-e6f036d64ba5" (UID: "66252276-3151-4fc8-accd-e6f036d64ba5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:37:37 crc kubenswrapper[4820]: I0201 14:37:37.241150 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66252276-3151-4fc8-accd-e6f036d64ba5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "66252276-3151-4fc8-accd-e6f036d64ba5" (UID: "66252276-3151-4fc8-accd-e6f036d64ba5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:37:37 crc kubenswrapper[4820]: I0201 14:37:37.277922 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66252276-3151-4fc8-accd-e6f036d64ba5-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 01 14:37:37 crc kubenswrapper[4820]: I0201 14:37:37.277956 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66252276-3151-4fc8-accd-e6f036d64ba5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 01 14:37:37 crc kubenswrapper[4820]: I0201 14:37:37.277967 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66252276-3151-4fc8-accd-e6f036d64ba5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 01 14:37:37 crc kubenswrapper[4820]: I0201 14:37:37.277978 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z26v2\" (UniqueName: \"kubernetes.io/projected/66252276-3151-4fc8-accd-e6f036d64ba5-kube-api-access-z26v2\") on node \"crc\" DevicePath \"\""
Feb 01 14:37:37 crc kubenswrapper[4820]: I0201 14:37:37.277986 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66252276-3151-4fc8-accd-e6f036d64ba5-config\") on node \"crc\" DevicePath \"\""
Feb 01 14:37:37 crc kubenswrapper[4820]: I0201 14:37:37.495994 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-wxbt2" event={"ID":"66252276-3151-4fc8-accd-e6f036d64ba5","Type":"ContainerDied","Data":"55b2fe5c79a8716fd7c67a2a57531c6c4a6bb7fa6cc30e22e046b34d4ea23e54"}
Feb 01 14:37:37 crc kubenswrapper[4820]: I0201 14:37:37.496055 4820 scope.go:117] "RemoveContainer" containerID="b28a153dc011e64e9a490df9f4b31552ab784ef2f8ec11f5138f823947e583f6"
Feb 01 14:37:37 crc kubenswrapper[4820]: I0201 14:37:37.496163 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-wxbt2"
Feb 01 14:37:37 crc kubenswrapper[4820]: I0201 14:37:37.515975 4820 scope.go:117] "RemoveContainer" containerID="5537e1d6936e85ac8b352ab74ee7c6e08900ddfb764c8153fec2374b451be7e3"
Feb 01 14:37:37 crc kubenswrapper[4820]: I0201 14:37:37.530393 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-wxbt2"]
Feb 01 14:37:37 crc kubenswrapper[4820]: I0201 14:37:37.535715 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-wxbt2"]
Feb 01 14:37:38 crc kubenswrapper[4820]: I0201 14:37:38.515864 4820 generic.go:334] "Generic (PLEG): container finished" podID="5488a765-644a-47bd-9665-afb4d8bdb6ea" containerID="b76059fba1e654409a9b4c07ad19076bccf9eee29f71b54cad3dce778b0e59ce" exitCode=0
Feb 01 14:37:38 crc kubenswrapper[4820]: I0201 14:37:38.515993 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gm5js" event={"ID":"5488a765-644a-47bd-9665-afb4d8bdb6ea","Type":"ContainerDied","Data":"b76059fba1e654409a9b4c07ad19076bccf9eee29f71b54cad3dce778b0e59ce"}
Feb 01 14:37:39 crc kubenswrapper[4820]: I0201 14:37:39.210328 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66252276-3151-4fc8-accd-e6f036d64ba5" path="/var/lib/kubelet/pods/66252276-3151-4fc8-accd-e6f036d64ba5/volumes"
Feb 01 14:37:39 crc kubenswrapper[4820]: I0201 14:37:39.786748 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gm5js"
Feb 01 14:37:39 crc kubenswrapper[4820]: I0201 14:37:39.920776 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5488a765-644a-47bd-9665-afb4d8bdb6ea-config-data\") pod \"5488a765-644a-47bd-9665-afb4d8bdb6ea\" (UID: \"5488a765-644a-47bd-9665-afb4d8bdb6ea\") "
Feb 01 14:37:39 crc kubenswrapper[4820]: I0201 14:37:39.920866 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9mz8\" (UniqueName: \"kubernetes.io/projected/5488a765-644a-47bd-9665-afb4d8bdb6ea-kube-api-access-d9mz8\") pod \"5488a765-644a-47bd-9665-afb4d8bdb6ea\" (UID: \"5488a765-644a-47bd-9665-afb4d8bdb6ea\") "
Feb 01 14:37:39 crc kubenswrapper[4820]: I0201 14:37:39.920938 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5488a765-644a-47bd-9665-afb4d8bdb6ea-combined-ca-bundle\") pod \"5488a765-644a-47bd-9665-afb4d8bdb6ea\" (UID: \"5488a765-644a-47bd-9665-afb4d8bdb6ea\") "
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:37:39 crc kubenswrapper[4820]: I0201 14:37:39.947970 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5488a765-644a-47bd-9665-afb4d8bdb6ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5488a765-644a-47bd-9665-afb4d8bdb6ea" (UID: "5488a765-644a-47bd-9665-afb4d8bdb6ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:37:39 crc kubenswrapper[4820]: I0201 14:37:39.964654 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5488a765-644a-47bd-9665-afb4d8bdb6ea-config-data" (OuterVolumeSpecName: "config-data") pod "5488a765-644a-47bd-9665-afb4d8bdb6ea" (UID: "5488a765-644a-47bd-9665-afb4d8bdb6ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.022733 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5488a765-644a-47bd-9665-afb4d8bdb6ea-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.022768 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9mz8\" (UniqueName: \"kubernetes.io/projected/5488a765-644a-47bd-9665-afb4d8bdb6ea-kube-api-access-d9mz8\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.022781 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5488a765-644a-47bd-9665-afb4d8bdb6ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.535708 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gm5js" event={"ID":"5488a765-644a-47bd-9665-afb4d8bdb6ea","Type":"ContainerDied","Data":"e2d84835f6aaaca56010c4e8c1f750b6e710853835b6977fae61f7f138816138"} Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.535752 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2d84835f6aaaca56010c4e8c1f750b6e710853835b6977fae61f7f138816138" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.535796 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-gm5js" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.797334 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-rwpvb"] Feb 01 14:37:40 crc kubenswrapper[4820]: E0201 14:37:40.797640 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b35ffca4-b990-4cf7-ac82-57b7f200ef24" containerName="mariadb-account-create-update" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.797653 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b35ffca4-b990-4cf7-ac82-57b7f200ef24" containerName="mariadb-account-create-update" Feb 01 14:37:40 crc kubenswrapper[4820]: E0201 14:37:40.797660 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66252276-3151-4fc8-accd-e6f036d64ba5" containerName="init" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.797676 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="66252276-3151-4fc8-accd-e6f036d64ba5" containerName="init" Feb 01 14:37:40 crc kubenswrapper[4820]: E0201 14:37:40.797690 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5488a765-644a-47bd-9665-afb4d8bdb6ea" containerName="keystone-db-sync" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.797696 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5488a765-644a-47bd-9665-afb4d8bdb6ea" containerName="keystone-db-sync" Feb 01 14:37:40 crc kubenswrapper[4820]: E0201 14:37:40.797708 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5217c856-fc01-4221-a551-63e793f60558" containerName="mariadb-account-create-update" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.797714 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5217c856-fc01-4221-a551-63e793f60558" containerName="mariadb-account-create-update" Feb 01 14:37:40 crc kubenswrapper[4820]: E0201 14:37:40.797724 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d45963d5-f579-41a2-81d5-399be2d3ff53" containerName="mariadb-database-create" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.797730 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d45963d5-f579-41a2-81d5-399be2d3ff53" containerName="mariadb-database-create" Feb 01 14:37:40 crc kubenswrapper[4820]: E0201 14:37:40.797740 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa83c1db-779c-4e77-8def-c9acf6560e6f" containerName="mariadb-database-create" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.797746 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa83c1db-779c-4e77-8def-c9acf6560e6f" containerName="mariadb-database-create" Feb 01 14:37:40 crc kubenswrapper[4820]: E0201 14:37:40.797761 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db77c675-16f7-46c6-bb7a-6aa18e492772" containerName="mariadb-account-create-update" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.797766 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="db77c675-16f7-46c6-bb7a-6aa18e492772" containerName="mariadb-account-create-update" Feb 01 14:37:40 crc kubenswrapper[4820]: E0201 14:37:40.797776 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66252276-3151-4fc8-accd-e6f036d64ba5" containerName="dnsmasq-dns" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.797782 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="66252276-3151-4fc8-accd-e6f036d64ba5" containerName="dnsmasq-dns" Feb 01 14:37:40 crc kubenswrapper[4820]: E0201 14:37:40.797790 4820 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="905d71d8-55a5-42a9-a22c-bfc3c3eb3e19" containerName="mariadb-database-create" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.797797 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="905d71d8-55a5-42a9-a22c-bfc3c3eb3e19" containerName="mariadb-database-create" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.797944 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5217c856-fc01-4221-a551-63e793f60558" containerName="mariadb-account-create-update" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.797958 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b35ffca4-b990-4cf7-ac82-57b7f200ef24" containerName="mariadb-account-create-update" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.797967 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="db77c675-16f7-46c6-bb7a-6aa18e492772" containerName="mariadb-account-create-update" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.797978 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="66252276-3151-4fc8-accd-e6f036d64ba5" containerName="dnsmasq-dns" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.797986 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5488a765-644a-47bd-9665-afb4d8bdb6ea" containerName="keystone-db-sync" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.797992 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="905d71d8-55a5-42a9-a22c-bfc3c3eb3e19" containerName="mariadb-database-create" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.798002 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d45963d5-f579-41a2-81d5-399be2d3ff53" containerName="mariadb-database-create" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.798010 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa83c1db-779c-4e77-8def-c9acf6560e6f" containerName="mariadb-database-create" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.798494 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rwpvb" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.805501 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rwpvb"] Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.808066 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.808248 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9xht9" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.808384 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.809916 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.810133 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.818843 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67795cd9-8d5cl"] Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.820359 4820 util.go:30] "No sandbox for pod can be found. 
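[note] The RemoveStaleState / "Deleted CPUSet assignment" burst above fires as the keystone-bootstrap-rwpvb ADD is admitted: the CPU and memory managers drop leftover per-container state for every pod retired earlier in this log. A sketch that tallies those lines per pod UID and container, with the regexp keyed to the klog fields as printed (illustrative only; in these entries most containers show up once for cpu_manager and once for memory_manager):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var re = regexp.MustCompile(`RemoveStaleState.* podUID="([^"]+)" containerName="([^"]+)"`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]+"/"+m[2]]++
		}
	}
	for key, n := range counts {
		fmt.Printf("%s: %d stale-state removal(s)\n", key, n)
	}
}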
Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-8d5cl" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.886602 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-8d5cl"] Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.936846 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cef020d-e1b2-4b04-8576-4dc7efe580fd-dns-svc\") pod \"dnsmasq-dns-67795cd9-8d5cl\" (UID: \"9cef020d-e1b2-4b04-8576-4dc7efe580fd\") " pod="openstack/dnsmasq-dns-67795cd9-8d5cl" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.936944 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-config-data\") pod \"keystone-bootstrap-rwpvb\" (UID: \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\") " pod="openstack/keystone-bootstrap-rwpvb" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.936969 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82sqs\" (UniqueName: \"kubernetes.io/projected/9cef020d-e1b2-4b04-8576-4dc7efe580fd-kube-api-access-82sqs\") pod \"dnsmasq-dns-67795cd9-8d5cl\" (UID: \"9cef020d-e1b2-4b04-8576-4dc7efe580fd\") " pod="openstack/dnsmasq-dns-67795cd9-8d5cl" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.937007 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-fernet-keys\") pod \"keystone-bootstrap-rwpvb\" (UID: \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\") " pod="openstack/keystone-bootstrap-rwpvb" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.937047 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-combined-ca-bundle\") pod \"keystone-bootstrap-rwpvb\" (UID: \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\") " pod="openstack/keystone-bootstrap-rwpvb" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.937073 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-credential-keys\") pod \"keystone-bootstrap-rwpvb\" (UID: \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\") " pod="openstack/keystone-bootstrap-rwpvb" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.937127 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cef020d-e1b2-4b04-8576-4dc7efe580fd-config\") pod \"dnsmasq-dns-67795cd9-8d5cl\" (UID: \"9cef020d-e1b2-4b04-8576-4dc7efe580fd\") " pod="openstack/dnsmasq-dns-67795cd9-8d5cl" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.937181 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-scripts\") pod \"keystone-bootstrap-rwpvb\" (UID: \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\") " pod="openstack/keystone-bootstrap-rwpvb" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.937238 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-68z7g\" (UniqueName: \"kubernetes.io/projected/a04b1df7-0713-46cd-a625-0a4b8bec72ae-kube-api-access-68z7g\") pod \"keystone-bootstrap-rwpvb\" (UID: \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\") " pod="openstack/keystone-bootstrap-rwpvb" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.937263 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cef020d-e1b2-4b04-8576-4dc7efe580fd-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-8d5cl\" (UID: \"9cef020d-e1b2-4b04-8576-4dc7efe580fd\") " pod="openstack/dnsmasq-dns-67795cd9-8d5cl" Feb 01 14:37:40 crc kubenswrapper[4820]: I0201 14:37:40.937285 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cef020d-e1b2-4b04-8576-4dc7efe580fd-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-8d5cl\" (UID: \"9cef020d-e1b2-4b04-8576-4dc7efe580fd\") " pod="openstack/dnsmasq-dns-67795cd9-8d5cl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.038835 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cef020d-e1b2-4b04-8576-4dc7efe580fd-dns-svc\") pod \"dnsmasq-dns-67795cd9-8d5cl\" (UID: \"9cef020d-e1b2-4b04-8576-4dc7efe580fd\") " pod="openstack/dnsmasq-dns-67795cd9-8d5cl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.038944 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-config-data\") pod \"keystone-bootstrap-rwpvb\" (UID: \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\") " pod="openstack/keystone-bootstrap-rwpvb" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.038972 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82sqs\" (UniqueName: \"kubernetes.io/projected/9cef020d-e1b2-4b04-8576-4dc7efe580fd-kube-api-access-82sqs\") pod \"dnsmasq-dns-67795cd9-8d5cl\" (UID: \"9cef020d-e1b2-4b04-8576-4dc7efe580fd\") " pod="openstack/dnsmasq-dns-67795cd9-8d5cl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.039008 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-fernet-keys\") pod \"keystone-bootstrap-rwpvb\" (UID: \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\") " pod="openstack/keystone-bootstrap-rwpvb" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.039042 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-combined-ca-bundle\") pod \"keystone-bootstrap-rwpvb\" (UID: \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\") " pod="openstack/keystone-bootstrap-rwpvb" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.039071 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-credential-keys\") pod \"keystone-bootstrap-rwpvb\" (UID: \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\") " pod="openstack/keystone-bootstrap-rwpvb" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.039125 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9cef020d-e1b2-4b04-8576-4dc7efe580fd-config\") pod \"dnsmasq-dns-67795cd9-8d5cl\" (UID: \"9cef020d-e1b2-4b04-8576-4dc7efe580fd\") " pod="openstack/dnsmasq-dns-67795cd9-8d5cl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.039152 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-scripts\") pod \"keystone-bootstrap-rwpvb\" (UID: \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\") " pod="openstack/keystone-bootstrap-rwpvb" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.039201 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68z7g\" (UniqueName: \"kubernetes.io/projected/a04b1df7-0713-46cd-a625-0a4b8bec72ae-kube-api-access-68z7g\") pod \"keystone-bootstrap-rwpvb\" (UID: \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\") " pod="openstack/keystone-bootstrap-rwpvb" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.039225 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cef020d-e1b2-4b04-8576-4dc7efe580fd-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-8d5cl\" (UID: \"9cef020d-e1b2-4b04-8576-4dc7efe580fd\") " pod="openstack/dnsmasq-dns-67795cd9-8d5cl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.039246 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cef020d-e1b2-4b04-8576-4dc7efe580fd-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-8d5cl\" (UID: \"9cef020d-e1b2-4b04-8576-4dc7efe580fd\") " pod="openstack/dnsmasq-dns-67795cd9-8d5cl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.039821 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cef020d-e1b2-4b04-8576-4dc7efe580fd-dns-svc\") pod \"dnsmasq-dns-67795cd9-8d5cl\" (UID: \"9cef020d-e1b2-4b04-8576-4dc7efe580fd\") " pod="openstack/dnsmasq-dns-67795cd9-8d5cl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.040216 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cef020d-e1b2-4b04-8576-4dc7efe580fd-config\") pod \"dnsmasq-dns-67795cd9-8d5cl\" (UID: \"9cef020d-e1b2-4b04-8576-4dc7efe580fd\") " pod="openstack/dnsmasq-dns-67795cd9-8d5cl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.040280 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cef020d-e1b2-4b04-8576-4dc7efe580fd-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-8d5cl\" (UID: \"9cef020d-e1b2-4b04-8576-4dc7efe580fd\") " pod="openstack/dnsmasq-dns-67795cd9-8d5cl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.040406 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cef020d-e1b2-4b04-8576-4dc7efe580fd-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-8d5cl\" (UID: \"9cef020d-e1b2-4b04-8576-4dc7efe580fd\") " pod="openstack/dnsmasq-dns-67795cd9-8d5cl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.046693 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-credential-keys\") pod \"keystone-bootstrap-rwpvb\" (UID: \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\") " 
pod="openstack/keystone-bootstrap-rwpvb" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.047255 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-config-data\") pod \"keystone-bootstrap-rwpvb\" (UID: \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\") " pod="openstack/keystone-bootstrap-rwpvb" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.048485 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-combined-ca-bundle\") pod \"keystone-bootstrap-rwpvb\" (UID: \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\") " pod="openstack/keystone-bootstrap-rwpvb" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.051514 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-fernet-keys\") pod \"keystone-bootstrap-rwpvb\" (UID: \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\") " pod="openstack/keystone-bootstrap-rwpvb" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.059330 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-scripts\") pod \"keystone-bootstrap-rwpvb\" (UID: \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\") " pod="openstack/keystone-bootstrap-rwpvb" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.078728 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82sqs\" (UniqueName: \"kubernetes.io/projected/9cef020d-e1b2-4b04-8576-4dc7efe580fd-kube-api-access-82sqs\") pod \"dnsmasq-dns-67795cd9-8d5cl\" (UID: \"9cef020d-e1b2-4b04-8576-4dc7efe580fd\") " pod="openstack/dnsmasq-dns-67795cd9-8d5cl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.082553 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-wpqvp"] Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.086438 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68z7g\" (UniqueName: \"kubernetes.io/projected/a04b1df7-0713-46cd-a625-0a4b8bec72ae-kube-api-access-68z7g\") pod \"keystone-bootstrap-rwpvb\" (UID: \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\") " pod="openstack/keystone-bootstrap-rwpvb" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.089712 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wpqvp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.098187 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2r5j4" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.098403 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.110235 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.118216 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rwpvb" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.142922 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-wpqvp"] Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.149243 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-8d5cl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.234268 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-s4w47"] Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.235487 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-s4w47" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.241498 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.241662 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.241790 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-n6mrb" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.242738 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/857bc684-4d17-461f-9183-6c0a7ac89845-config-data\") pod \"cinder-db-sync-wpqvp\" (UID: \"857bc684-4d17-461f-9183-6c0a7ac89845\") " pod="openstack/cinder-db-sync-wpqvp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.242797 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg8bz\" (UniqueName: \"kubernetes.io/projected/857bc684-4d17-461f-9183-6c0a7ac89845-kube-api-access-jg8bz\") pod \"cinder-db-sync-wpqvp\" (UID: \"857bc684-4d17-461f-9183-6c0a7ac89845\") " pod="openstack/cinder-db-sync-wpqvp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.242866 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/857bc684-4d17-461f-9183-6c0a7ac89845-db-sync-config-data\") pod \"cinder-db-sync-wpqvp\" (UID: \"857bc684-4d17-461f-9183-6c0a7ac89845\") " pod="openstack/cinder-db-sync-wpqvp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.242944 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/857bc684-4d17-461f-9183-6c0a7ac89845-combined-ca-bundle\") pod \"cinder-db-sync-wpqvp\" (UID: \"857bc684-4d17-461f-9183-6c0a7ac89845\") " pod="openstack/cinder-db-sync-wpqvp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.242987 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/857bc684-4d17-461f-9183-6c0a7ac89845-scripts\") pod \"cinder-db-sync-wpqvp\" (UID: \"857bc684-4d17-461f-9183-6c0a7ac89845\") " pod="openstack/cinder-db-sync-wpqvp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.243023 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/857bc684-4d17-461f-9183-6c0a7ac89845-etc-machine-id\") pod \"cinder-db-sync-wpqvp\" (UID: \"857bc684-4d17-461f-9183-6c0a7ac89845\") " 
pod="openstack/cinder-db-sync-wpqvp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.269456 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-s4w47"] Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.345698 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvthc\" (UniqueName: \"kubernetes.io/projected/5668430a-a444-4146-b357-30f626e2e9d6-kube-api-access-tvthc\") pod \"neutron-db-sync-s4w47\" (UID: \"5668430a-a444-4146-b357-30f626e2e9d6\") " pod="openstack/neutron-db-sync-s4w47" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.345749 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5668430a-a444-4146-b357-30f626e2e9d6-config\") pod \"neutron-db-sync-s4w47\" (UID: \"5668430a-a444-4146-b357-30f626e2e9d6\") " pod="openstack/neutron-db-sync-s4w47" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.345799 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/857bc684-4d17-461f-9183-6c0a7ac89845-db-sync-config-data\") pod \"cinder-db-sync-wpqvp\" (UID: \"857bc684-4d17-461f-9183-6c0a7ac89845\") " pod="openstack/cinder-db-sync-wpqvp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.345837 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5668430a-a444-4146-b357-30f626e2e9d6-combined-ca-bundle\") pod \"neutron-db-sync-s4w47\" (UID: \"5668430a-a444-4146-b357-30f626e2e9d6\") " pod="openstack/neutron-db-sync-s4w47" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.345896 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/857bc684-4d17-461f-9183-6c0a7ac89845-combined-ca-bundle\") pod \"cinder-db-sync-wpqvp\" (UID: \"857bc684-4d17-461f-9183-6c0a7ac89845\") " pod="openstack/cinder-db-sync-wpqvp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.345937 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/857bc684-4d17-461f-9183-6c0a7ac89845-scripts\") pod \"cinder-db-sync-wpqvp\" (UID: \"857bc684-4d17-461f-9183-6c0a7ac89845\") " pod="openstack/cinder-db-sync-wpqvp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.345959 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/857bc684-4d17-461f-9183-6c0a7ac89845-etc-machine-id\") pod \"cinder-db-sync-wpqvp\" (UID: \"857bc684-4d17-461f-9183-6c0a7ac89845\") " pod="openstack/cinder-db-sync-wpqvp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.345988 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/857bc684-4d17-461f-9183-6c0a7ac89845-config-data\") pod \"cinder-db-sync-wpqvp\" (UID: \"857bc684-4d17-461f-9183-6c0a7ac89845\") " pod="openstack/cinder-db-sync-wpqvp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.346009 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg8bz\" (UniqueName: \"kubernetes.io/projected/857bc684-4d17-461f-9183-6c0a7ac89845-kube-api-access-jg8bz\") pod \"cinder-db-sync-wpqvp\" (UID: 
\"857bc684-4d17-461f-9183-6c0a7ac89845\") " pod="openstack/cinder-db-sync-wpqvp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.346558 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/857bc684-4d17-461f-9183-6c0a7ac89845-etc-machine-id\") pod \"cinder-db-sync-wpqvp\" (UID: \"857bc684-4d17-461f-9183-6c0a7ac89845\") " pod="openstack/cinder-db-sync-wpqvp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.366926 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/857bc684-4d17-461f-9183-6c0a7ac89845-combined-ca-bundle\") pod \"cinder-db-sync-wpqvp\" (UID: \"857bc684-4d17-461f-9183-6c0a7ac89845\") " pod="openstack/cinder-db-sync-wpqvp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.367109 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/857bc684-4d17-461f-9183-6c0a7ac89845-config-data\") pod \"cinder-db-sync-wpqvp\" (UID: \"857bc684-4d17-461f-9183-6c0a7ac89845\") " pod="openstack/cinder-db-sync-wpqvp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.367884 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/857bc684-4d17-461f-9183-6c0a7ac89845-scripts\") pod \"cinder-db-sync-wpqvp\" (UID: \"857bc684-4d17-461f-9183-6c0a7ac89845\") " pod="openstack/cinder-db-sync-wpqvp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.374166 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-zr8wv"] Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.375705 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/857bc684-4d17-461f-9183-6c0a7ac89845-db-sync-config-data\") pod \"cinder-db-sync-wpqvp\" (UID: \"857bc684-4d17-461f-9183-6c0a7ac89845\") " pod="openstack/cinder-db-sync-wpqvp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.375824 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zr8wv" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.380233 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.383322 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-54chm" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.385236 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-mg8xp"] Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.385525 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg8bz\" (UniqueName: \"kubernetes.io/projected/857bc684-4d17-461f-9183-6c0a7ac89845-kube-api-access-jg8bz\") pod \"cinder-db-sync-wpqvp\" (UID: \"857bc684-4d17-461f-9183-6c0a7ac89845\") " pod="openstack/cinder-db-sync-wpqvp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.386812 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-mg8xp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.396693 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.396786 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-5crjx" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.397097 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.410650 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-8d5cl"] Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.449860 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7w5w\" (UniqueName: \"kubernetes.io/projected/8596fa26-8ba1-4348-8493-3df37f0cfcaa-kube-api-access-n7w5w\") pod \"barbican-db-sync-zr8wv\" (UID: \"8596fa26-8ba1-4348-8493-3df37f0cfcaa\") " pod="openstack/barbican-db-sync-zr8wv" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.449948 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvthc\" (UniqueName: \"kubernetes.io/projected/5668430a-a444-4146-b357-30f626e2e9d6-kube-api-access-tvthc\") pod \"neutron-db-sync-s4w47\" (UID: \"5668430a-a444-4146-b357-30f626e2e9d6\") " pod="openstack/neutron-db-sync-s4w47" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.449976 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5668430a-a444-4146-b357-30f626e2e9d6-config\") pod \"neutron-db-sync-s4w47\" (UID: \"5668430a-a444-4146-b357-30f626e2e9d6\") " pod="openstack/neutron-db-sync-s4w47" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.449998 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8596fa26-8ba1-4348-8493-3df37f0cfcaa-db-sync-config-data\") pod \"barbican-db-sync-zr8wv\" (UID: \"8596fa26-8ba1-4348-8493-3df37f0cfcaa\") " pod="openstack/barbican-db-sync-zr8wv" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.450039 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8596fa26-8ba1-4348-8493-3df37f0cfcaa-combined-ca-bundle\") pod \"barbican-db-sync-zr8wv\" (UID: \"8596fa26-8ba1-4348-8493-3df37f0cfcaa\") " pod="openstack/barbican-db-sync-zr8wv" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.450078 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5668430a-a444-4146-b357-30f626e2e9d6-combined-ca-bundle\") pod \"neutron-db-sync-s4w47\" (UID: \"5668430a-a444-4146-b357-30f626e2e9d6\") " pod="openstack/neutron-db-sync-s4w47" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.462190 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-zr8wv"] Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.463363 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-wpqvp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.469707 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5668430a-a444-4146-b357-30f626e2e9d6-combined-ca-bundle\") pod \"neutron-db-sync-s4w47\" (UID: \"5668430a-a444-4146-b357-30f626e2e9d6\") " pod="openstack/neutron-db-sync-s4w47" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.469837 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5668430a-a444-4146-b357-30f626e2e9d6-config\") pod \"neutron-db-sync-s4w47\" (UID: \"5668430a-a444-4146-b357-30f626e2e9d6\") " pod="openstack/neutron-db-sync-s4w47" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.479562 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvthc\" (UniqueName: \"kubernetes.io/projected/5668430a-a444-4146-b357-30f626e2e9d6-kube-api-access-tvthc\") pod \"neutron-db-sync-s4w47\" (UID: \"5668430a-a444-4146-b357-30f626e2e9d6\") " pod="openstack/neutron-db-sync-s4w47" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.530583 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-mg8xp"] Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.551208 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7w5w\" (UniqueName: \"kubernetes.io/projected/8596fa26-8ba1-4348-8493-3df37f0cfcaa-kube-api-access-n7w5w\") pod \"barbican-db-sync-zr8wv\" (UID: \"8596fa26-8ba1-4348-8493-3df37f0cfcaa\") " pod="openstack/barbican-db-sync-zr8wv" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.551536 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cb06733-7a89-4153-a16b-e69317c5f8a3-logs\") pod \"placement-db-sync-mg8xp\" (UID: \"1cb06733-7a89-4153-a16b-e69317c5f8a3\") " pod="openstack/placement-db-sync-mg8xp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.551567 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8596fa26-8ba1-4348-8493-3df37f0cfcaa-db-sync-config-data\") pod \"barbican-db-sync-zr8wv\" (UID: \"8596fa26-8ba1-4348-8493-3df37f0cfcaa\") " pod="openstack/barbican-db-sync-zr8wv" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.551596 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cb06733-7a89-4153-a16b-e69317c5f8a3-scripts\") pod \"placement-db-sync-mg8xp\" (UID: \"1cb06733-7a89-4153-a16b-e69317c5f8a3\") " pod="openstack/placement-db-sync-mg8xp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.551623 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4hkh\" (UniqueName: \"kubernetes.io/projected/1cb06733-7a89-4153-a16b-e69317c5f8a3-kube-api-access-g4hkh\") pod \"placement-db-sync-mg8xp\" (UID: \"1cb06733-7a89-4153-a16b-e69317c5f8a3\") " pod="openstack/placement-db-sync-mg8xp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.551645 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8596fa26-8ba1-4348-8493-3df37f0cfcaa-combined-ca-bundle\") pod 
\"barbican-db-sync-zr8wv\" (UID: \"8596fa26-8ba1-4348-8493-3df37f0cfcaa\") " pod="openstack/barbican-db-sync-zr8wv" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.551664 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb06733-7a89-4153-a16b-e69317c5f8a3-combined-ca-bundle\") pod \"placement-db-sync-mg8xp\" (UID: \"1cb06733-7a89-4153-a16b-e69317c5f8a3\") " pod="openstack/placement-db-sync-mg8xp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.551712 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cb06733-7a89-4153-a16b-e69317c5f8a3-config-data\") pod \"placement-db-sync-mg8xp\" (UID: \"1cb06733-7a89-4153-a16b-e69317c5f8a3\") " pod="openstack/placement-db-sync-mg8xp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.555218 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8596fa26-8ba1-4348-8493-3df37f0cfcaa-combined-ca-bundle\") pod \"barbican-db-sync-zr8wv\" (UID: \"8596fa26-8ba1-4348-8493-3df37f0cfcaa\") " pod="openstack/barbican-db-sync-zr8wv" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.556176 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8596fa26-8ba1-4348-8493-3df37f0cfcaa-db-sync-config-data\") pod \"barbican-db-sync-zr8wv\" (UID: \"8596fa26-8ba1-4348-8493-3df37f0cfcaa\") " pod="openstack/barbican-db-sync-zr8wv" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.580827 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7w5w\" (UniqueName: \"kubernetes.io/projected/8596fa26-8ba1-4348-8493-3df37f0cfcaa-kube-api-access-n7w5w\") pod \"barbican-db-sync-zr8wv\" (UID: \"8596fa26-8ba1-4348-8493-3df37f0cfcaa\") " pod="openstack/barbican-db-sync-zr8wv" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.581320 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.591791 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.596057 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.598501 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.613147 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl"] Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.617214 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.630613 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-s4w47" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.632049 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl"] Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.653964 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qspv5\" (UniqueName: \"kubernetes.io/projected/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-kube-api-access-qspv5\") pod \"ceilometer-0\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " pod="openstack/ceilometer-0" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.654055 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cb06733-7a89-4153-a16b-e69317c5f8a3-config-data\") pod \"placement-db-sync-mg8xp\" (UID: \"1cb06733-7a89-4153-a16b-e69317c5f8a3\") " pod="openstack/placement-db-sync-mg8xp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.654102 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " pod="openstack/ceilometer-0" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.654125 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " pod="openstack/ceilometer-0" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.654175 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-config-data\") pod \"ceilometer-0\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " pod="openstack/ceilometer-0" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.654194 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-scripts\") pod \"ceilometer-0\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " pod="openstack/ceilometer-0" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.655037 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cb06733-7a89-4153-a16b-e69317c5f8a3-logs\") pod \"placement-db-sync-mg8xp\" (UID: \"1cb06733-7a89-4153-a16b-e69317c5f8a3\") " pod="openstack/placement-db-sync-mg8xp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.655107 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cb06733-7a89-4153-a16b-e69317c5f8a3-scripts\") pod \"placement-db-sync-mg8xp\" (UID: \"1cb06733-7a89-4153-a16b-e69317c5f8a3\") " pod="openstack/placement-db-sync-mg8xp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.655142 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-run-httpd\") pod \"ceilometer-0\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " 
pod="openstack/ceilometer-0" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.655170 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4hkh\" (UniqueName: \"kubernetes.io/projected/1cb06733-7a89-4153-a16b-e69317c5f8a3-kube-api-access-g4hkh\") pod \"placement-db-sync-mg8xp\" (UID: \"1cb06733-7a89-4153-a16b-e69317c5f8a3\") " pod="openstack/placement-db-sync-mg8xp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.655196 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-log-httpd\") pod \"ceilometer-0\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " pod="openstack/ceilometer-0" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.655221 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb06733-7a89-4153-a16b-e69317c5f8a3-combined-ca-bundle\") pod \"placement-db-sync-mg8xp\" (UID: \"1cb06733-7a89-4153-a16b-e69317c5f8a3\") " pod="openstack/placement-db-sync-mg8xp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.657256 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cb06733-7a89-4153-a16b-e69317c5f8a3-logs\") pod \"placement-db-sync-mg8xp\" (UID: \"1cb06733-7a89-4153-a16b-e69317c5f8a3\") " pod="openstack/placement-db-sync-mg8xp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.663377 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cb06733-7a89-4153-a16b-e69317c5f8a3-scripts\") pod \"placement-db-sync-mg8xp\" (UID: \"1cb06733-7a89-4153-a16b-e69317c5f8a3\") " pod="openstack/placement-db-sync-mg8xp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.667090 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cb06733-7a89-4153-a16b-e69317c5f8a3-config-data\") pod \"placement-db-sync-mg8xp\" (UID: \"1cb06733-7a89-4153-a16b-e69317c5f8a3\") " pod="openstack/placement-db-sync-mg8xp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.676067 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb06733-7a89-4153-a16b-e69317c5f8a3-combined-ca-bundle\") pod \"placement-db-sync-mg8xp\" (UID: \"1cb06733-7a89-4153-a16b-e69317c5f8a3\") " pod="openstack/placement-db-sync-mg8xp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.682452 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4hkh\" (UniqueName: \"kubernetes.io/projected/1cb06733-7a89-4153-a16b-e69317c5f8a3-kube-api-access-g4hkh\") pod \"placement-db-sync-mg8xp\" (UID: \"1cb06733-7a89-4153-a16b-e69317c5f8a3\") " pod="openstack/placement-db-sync-mg8xp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.700446 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.718706 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zr8wv" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.744072 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-mg8xp" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.759640 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-config-data\") pod \"ceilometer-0\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " pod="openstack/ceilometer-0" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.759699 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-scripts\") pod \"ceilometer-0\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " pod="openstack/ceilometer-0" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.759755 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dda189ec-fa75-4457-ad6b-1b74df127b0c-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-jn6xl\" (UID: \"dda189ec-fa75-4457-ad6b-1b74df127b0c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.759814 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dda189ec-fa75-4457-ad6b-1b74df127b0c-config\") pod \"dnsmasq-dns-5b6dbdb6f5-jn6xl\" (UID: \"dda189ec-fa75-4457-ad6b-1b74df127b0c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.759861 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-run-httpd\") pod \"ceilometer-0\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " pod="openstack/ceilometer-0" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.759905 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-log-httpd\") pod \"ceilometer-0\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " pod="openstack/ceilometer-0" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.759941 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dda189ec-fa75-4457-ad6b-1b74df127b0c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-jn6xl\" (UID: \"dda189ec-fa75-4457-ad6b-1b74df127b0c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.759971 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qspv5\" (UniqueName: \"kubernetes.io/projected/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-kube-api-access-qspv5\") pod \"ceilometer-0\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " pod="openstack/ceilometer-0" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.760023 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnmxk\" (UniqueName: \"kubernetes.io/projected/dda189ec-fa75-4457-ad6b-1b74df127b0c-kube-api-access-mnmxk\") pod \"dnsmasq-dns-5b6dbdb6f5-jn6xl\" (UID: \"dda189ec-fa75-4457-ad6b-1b74df127b0c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.760039 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dda189ec-fa75-4457-ad6b-1b74df127b0c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-jn6xl\" (UID: \"dda189ec-fa75-4457-ad6b-1b74df127b0c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.760056 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " pod="openstack/ceilometer-0" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.760072 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " pod="openstack/ceilometer-0" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.760950 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-run-httpd\") pod \"ceilometer-0\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " pod="openstack/ceilometer-0" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.761212 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-log-httpd\") pod \"ceilometer-0\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " pod="openstack/ceilometer-0" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.763391 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " pod="openstack/ceilometer-0" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.765275 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-scripts\") pod \"ceilometer-0\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " pod="openstack/ceilometer-0" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.767435 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " pod="openstack/ceilometer-0" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.777898 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-config-data\") pod \"ceilometer-0\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " pod="openstack/ceilometer-0" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.780698 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qspv5\" (UniqueName: \"kubernetes.io/projected/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-kube-api-access-qspv5\") pod \"ceilometer-0\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " pod="openstack/ceilometer-0" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.865199 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dda189ec-fa75-4457-ad6b-1b74df127b0c-config\") pod \"dnsmasq-dns-5b6dbdb6f5-jn6xl\" (UID: \"dda189ec-fa75-4457-ad6b-1b74df127b0c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.866382 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dda189ec-fa75-4457-ad6b-1b74df127b0c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-jn6xl\" (UID: \"dda189ec-fa75-4457-ad6b-1b74df127b0c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.866615 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnmxk\" (UniqueName: \"kubernetes.io/projected/dda189ec-fa75-4457-ad6b-1b74df127b0c-kube-api-access-mnmxk\") pod \"dnsmasq-dns-5b6dbdb6f5-jn6xl\" (UID: \"dda189ec-fa75-4457-ad6b-1b74df127b0c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.866750 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dda189ec-fa75-4457-ad6b-1b74df127b0c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-jn6xl\" (UID: \"dda189ec-fa75-4457-ad6b-1b74df127b0c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.866977 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dda189ec-fa75-4457-ad6b-1b74df127b0c-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-jn6xl\" (UID: \"dda189ec-fa75-4457-ad6b-1b74df127b0c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.866685 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dda189ec-fa75-4457-ad6b-1b74df127b0c-config\") pod \"dnsmasq-dns-5b6dbdb6f5-jn6xl\" (UID: \"dda189ec-fa75-4457-ad6b-1b74df127b0c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.867227 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dda189ec-fa75-4457-ad6b-1b74df127b0c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-jn6xl\" (UID: \"dda189ec-fa75-4457-ad6b-1b74df127b0c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.868023 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dda189ec-fa75-4457-ad6b-1b74df127b0c-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-jn6xl\" (UID: \"dda189ec-fa75-4457-ad6b-1b74df127b0c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.868372 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dda189ec-fa75-4457-ad6b-1b74df127b0c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-jn6xl\" (UID: \"dda189ec-fa75-4457-ad6b-1b74df127b0c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.888039 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnmxk\" (UniqueName: 
\"kubernetes.io/projected/dda189ec-fa75-4457-ad6b-1b74df127b0c-kube-api-access-mnmxk\") pod \"dnsmasq-dns-5b6dbdb6f5-jn6xl\" (UID: \"dda189ec-fa75-4457-ad6b-1b74df127b0c\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.890287 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-8d5cl"] Feb 01 14:37:41 crc kubenswrapper[4820]: W0201 14:37:41.899666 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cef020d_e1b2_4b04_8576_4dc7efe580fd.slice/crio-1be73682058fed5eecd129a5f97779604293cc424ae3f7ed0853a2be4fb65308 WatchSource:0}: Error finding container 1be73682058fed5eecd129a5f97779604293cc424ae3f7ed0853a2be4fb65308: Status 404 returned error can't find the container with id 1be73682058fed5eecd129a5f97779604293cc424ae3f7ed0853a2be4fb65308 Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.931170 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 01 14:37:41 crc kubenswrapper[4820]: I0201 14:37:41.995504 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rwpvb"] Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.002251 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" Feb 01 14:37:42 crc kubenswrapper[4820]: W0201 14:37:42.034089 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda04b1df7_0713_46cd_a625_0a4b8bec72ae.slice/crio-063f6f63e9b404cc1401804b1508475ce2a7aa92ed3619ef588d34c46b6887e5 WatchSource:0}: Error finding container 063f6f63e9b404cc1401804b1508475ce2a7aa92ed3619ef588d34c46b6887e5: Status 404 returned error can't find the container with id 063f6f63e9b404cc1401804b1508475ce2a7aa92ed3619ef588d34c46b6887e5 Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.142209 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-wpqvp"] Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.253100 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-zr8wv"] Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.299107 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-s4w47"] Feb 01 14:37:42 crc kubenswrapper[4820]: W0201 14:37:42.319214 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5668430a_a444_4146_b357_30f626e2e9d6.slice/crio-52b7f0bb2e1473e494933831f142f37602b31266a6cec12930732f1f40093552 WatchSource:0}: Error finding container 52b7f0bb2e1473e494933831f142f37602b31266a6cec12930732f1f40093552: Status 404 returned error can't find the container with id 52b7f0bb2e1473e494933831f142f37602b31266a6cec12930732f1f40093552 Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.452082 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-mg8xp"] Feb 01 14:37:42 crc kubenswrapper[4820]: W0201 14:37:42.476352 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1cb06733_7a89_4153_a16b_e69317c5f8a3.slice/crio-dfe0a53388f6bbab268da9d2dd08d3c6358fcac2f97973839d9f6c1ba73b2451 WatchSource:0}: Error finding container 
dfe0a53388f6bbab268da9d2dd08d3c6358fcac2f97973839d9f6c1ba73b2451: Status 404 returned error can't find the container with id dfe0a53388f6bbab268da9d2dd08d3c6358fcac2f97973839d9f6c1ba73b2451 Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.560521 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wpqvp" event={"ID":"857bc684-4d17-461f-9183-6c0a7ac89845","Type":"ContainerStarted","Data":"e8f12a4786490ef01678bb0b9959661b5af49ed7449338204316b90bd47cbbaf"} Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.561889 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zr8wv" event={"ID":"8596fa26-8ba1-4348-8493-3df37f0cfcaa","Type":"ContainerStarted","Data":"db5bdb7a511381742a42692b31e9b77c5674cd30861abf0d071bfa0d3726b61b"} Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.563758 4820 generic.go:334] "Generic (PLEG): container finished" podID="9cef020d-e1b2-4b04-8576-4dc7efe580fd" containerID="e70f721fed30c0ee9984c003ddfc930c06ab26547c5517160e8a033e04bfd7fc" exitCode=0 Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.563804 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-8d5cl" event={"ID":"9cef020d-e1b2-4b04-8576-4dc7efe580fd","Type":"ContainerDied","Data":"e70f721fed30c0ee9984c003ddfc930c06ab26547c5517160e8a033e04bfd7fc"} Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.563819 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-8d5cl" event={"ID":"9cef020d-e1b2-4b04-8576-4dc7efe580fd","Type":"ContainerStarted","Data":"1be73682058fed5eecd129a5f97779604293cc424ae3f7ed0853a2be4fb65308"} Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.570502 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mg8xp" event={"ID":"1cb06733-7a89-4153-a16b-e69317c5f8a3","Type":"ContainerStarted","Data":"dfe0a53388f6bbab268da9d2dd08d3c6358fcac2f97973839d9f6c1ba73b2451"} Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.572308 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.573526 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rwpvb" event={"ID":"a04b1df7-0713-46cd-a625-0a4b8bec72ae","Type":"ContainerStarted","Data":"62a7b3084fc031919202118c8db803cf0f9ff03d330186cce16c1d6cbc815a9e"} Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.573562 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rwpvb" event={"ID":"a04b1df7-0713-46cd-a625-0a4b8bec72ae","Type":"ContainerStarted","Data":"063f6f63e9b404cc1401804b1508475ce2a7aa92ed3619ef588d34c46b6887e5"} Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.575381 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-s4w47" event={"ID":"5668430a-a444-4146-b357-30f626e2e9d6","Type":"ContainerStarted","Data":"aa0a5f8792601307a279d9ca93611812efbdfaba39271a4bb62827725f41699f"} Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.575403 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-s4w47" event={"ID":"5668430a-a444-4146-b357-30f626e2e9d6","Type":"ContainerStarted","Data":"52b7f0bb2e1473e494933831f142f37602b31266a6cec12930732f1f40093552"} Feb 01 14:37:42 crc kubenswrapper[4820]: W0201 14:37:42.591025 4820 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcf7fd8f_91f1_4742_bcdb_351da5ded25a.slice/crio-e2c32e105b37fbd29c13e786a7280a462b7f2fc37cf7acefc7473d5be1f93646 WatchSource:0}: Error finding container e2c32e105b37fbd29c13e786a7280a462b7f2fc37cf7acefc7473d5be1f93646: Status 404 returned error can't find the container with id e2c32e105b37fbd29c13e786a7280a462b7f2fc37cf7acefc7473d5be1f93646 Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.608861 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-rwpvb" podStartSLOduration=2.608841622 podStartE2EDuration="2.608841622s" podCreationTimestamp="2026-02-01 14:37:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:37:42.598399147 +0000 UTC m=+1004.118765431" watchObservedRunningTime="2026-02-01 14:37:42.608841622 +0000 UTC m=+1004.129207896" Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.613521 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-s4w47" podStartSLOduration=1.613482855 podStartE2EDuration="1.613482855s" podCreationTimestamp="2026-02-01 14:37:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:37:42.612351977 +0000 UTC m=+1004.132718261" watchObservedRunningTime="2026-02-01 14:37:42.613482855 +0000 UTC m=+1004.133849139" Feb 01 14:37:42 crc kubenswrapper[4820]: W0201 14:37:42.667236 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddda189ec_fa75_4457_ad6b_1b74df127b0c.slice/crio-4616ea588a3e30a099dbec2af33ee0a3443a65b66af79332d0a0fd8ec9d51dd5 WatchSource:0}: Error finding container 4616ea588a3e30a099dbec2af33ee0a3443a65b66af79332d0a0fd8ec9d51dd5: Status 404 returned error can't find the container with id 4616ea588a3e30a099dbec2af33ee0a3443a65b66af79332d0a0fd8ec9d51dd5 Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.670744 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl"] Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.913368 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-8d5cl" Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.915492 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.998114 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cef020d-e1b2-4b04-8576-4dc7efe580fd-ovsdbserver-sb\") pod \"9cef020d-e1b2-4b04-8576-4dc7efe580fd\" (UID: \"9cef020d-e1b2-4b04-8576-4dc7efe580fd\") " Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.998220 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cef020d-e1b2-4b04-8576-4dc7efe580fd-config\") pod \"9cef020d-e1b2-4b04-8576-4dc7efe580fd\" (UID: \"9cef020d-e1b2-4b04-8576-4dc7efe580fd\") " Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.998263 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cef020d-e1b2-4b04-8576-4dc7efe580fd-dns-svc\") pod \"9cef020d-e1b2-4b04-8576-4dc7efe580fd\" (UID: \"9cef020d-e1b2-4b04-8576-4dc7efe580fd\") " Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.998323 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82sqs\" (UniqueName: \"kubernetes.io/projected/9cef020d-e1b2-4b04-8576-4dc7efe580fd-kube-api-access-82sqs\") pod \"9cef020d-e1b2-4b04-8576-4dc7efe580fd\" (UID: \"9cef020d-e1b2-4b04-8576-4dc7efe580fd\") " Feb 01 14:37:42 crc kubenswrapper[4820]: I0201 14:37:42.998420 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cef020d-e1b2-4b04-8576-4dc7efe580fd-ovsdbserver-nb\") pod \"9cef020d-e1b2-4b04-8576-4dc7efe580fd\" (UID: \"9cef020d-e1b2-4b04-8576-4dc7efe580fd\") " Feb 01 14:37:43 crc kubenswrapper[4820]: I0201 14:37:43.027929 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cef020d-e1b2-4b04-8576-4dc7efe580fd-kube-api-access-82sqs" (OuterVolumeSpecName: "kube-api-access-82sqs") pod "9cef020d-e1b2-4b04-8576-4dc7efe580fd" (UID: "9cef020d-e1b2-4b04-8576-4dc7efe580fd"). InnerVolumeSpecName "kube-api-access-82sqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:37:43 crc kubenswrapper[4820]: I0201 14:37:43.036259 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cef020d-e1b2-4b04-8576-4dc7efe580fd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9cef020d-e1b2-4b04-8576-4dc7efe580fd" (UID: "9cef020d-e1b2-4b04-8576-4dc7efe580fd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:37:43 crc kubenswrapper[4820]: I0201 14:37:43.060411 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cef020d-e1b2-4b04-8576-4dc7efe580fd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9cef020d-e1b2-4b04-8576-4dc7efe580fd" (UID: "9cef020d-e1b2-4b04-8576-4dc7efe580fd"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:37:43 crc kubenswrapper[4820]: I0201 14:37:43.064044 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cef020d-e1b2-4b04-8576-4dc7efe580fd-config" (OuterVolumeSpecName: "config") pod "9cef020d-e1b2-4b04-8576-4dc7efe580fd" (UID: "9cef020d-e1b2-4b04-8576-4dc7efe580fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:37:43 crc kubenswrapper[4820]: I0201 14:37:43.065433 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cef020d-e1b2-4b04-8576-4dc7efe580fd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9cef020d-e1b2-4b04-8576-4dc7efe580fd" (UID: "9cef020d-e1b2-4b04-8576-4dc7efe580fd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:37:43 crc kubenswrapper[4820]: I0201 14:37:43.100547 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cef020d-e1b2-4b04-8576-4dc7efe580fd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:43 crc kubenswrapper[4820]: I0201 14:37:43.100582 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cef020d-e1b2-4b04-8576-4dc7efe580fd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:43 crc kubenswrapper[4820]: I0201 14:37:43.100592 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cef020d-e1b2-4b04-8576-4dc7efe580fd-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:43 crc kubenswrapper[4820]: I0201 14:37:43.100620 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cef020d-e1b2-4b04-8576-4dc7efe580fd-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:43 crc kubenswrapper[4820]: I0201 14:37:43.100631 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82sqs\" (UniqueName: \"kubernetes.io/projected/9cef020d-e1b2-4b04-8576-4dc7efe580fd-kube-api-access-82sqs\") on node \"crc\" DevicePath \"\"" Feb 01 14:37:43 crc kubenswrapper[4820]: I0201 14:37:43.591998 4820 generic.go:334] "Generic (PLEG): container finished" podID="dda189ec-fa75-4457-ad6b-1b74df127b0c" containerID="c308d66f737b30b0c8bf257ee150954ad305db019336d8cf1ce28a6e8903a3f6" exitCode=0 Feb 01 14:37:43 crc kubenswrapper[4820]: I0201 14:37:43.592143 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" event={"ID":"dda189ec-fa75-4457-ad6b-1b74df127b0c","Type":"ContainerDied","Data":"c308d66f737b30b0c8bf257ee150954ad305db019336d8cf1ce28a6e8903a3f6"} Feb 01 14:37:43 crc kubenswrapper[4820]: I0201 14:37:43.592344 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" event={"ID":"dda189ec-fa75-4457-ad6b-1b74df127b0c","Type":"ContainerStarted","Data":"4616ea588a3e30a099dbec2af33ee0a3443a65b66af79332d0a0fd8ec9d51dd5"} Feb 01 14:37:43 crc kubenswrapper[4820]: I0201 14:37:43.595635 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcf7fd8f-91f1-4742-bcdb-351da5ded25a","Type":"ContainerStarted","Data":"e2c32e105b37fbd29c13e786a7280a462b7f2fc37cf7acefc7473d5be1f93646"} Feb 01 14:37:43 crc kubenswrapper[4820]: I0201 14:37:43.604861 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-8d5cl" Feb 01 14:37:43 crc kubenswrapper[4820]: I0201 14:37:43.604849 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-8d5cl" event={"ID":"9cef020d-e1b2-4b04-8576-4dc7efe580fd","Type":"ContainerDied","Data":"1be73682058fed5eecd129a5f97779604293cc424ae3f7ed0853a2be4fb65308"} Feb 01 14:37:43 crc kubenswrapper[4820]: I0201 14:37:43.604926 4820 scope.go:117] "RemoveContainer" containerID="e70f721fed30c0ee9984c003ddfc930c06ab26547c5517160e8a033e04bfd7fc" Feb 01 14:37:43 crc kubenswrapper[4820]: I0201 14:37:43.736204 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-8d5cl"] Feb 01 14:37:43 crc kubenswrapper[4820]: I0201 14:37:43.741157 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-8d5cl"] Feb 01 14:37:44 crc kubenswrapper[4820]: I0201 14:37:44.625263 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" event={"ID":"dda189ec-fa75-4457-ad6b-1b74df127b0c","Type":"ContainerStarted","Data":"eb41abd1ad58d87c9f4b0faf91fd472dc995b0d23fb18cfb12e25e74d82b128d"} Feb 01 14:37:44 crc kubenswrapper[4820]: I0201 14:37:44.625712 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" Feb 01 14:37:44 crc kubenswrapper[4820]: I0201 14:37:44.646329 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" podStartSLOduration=3.646309742 podStartE2EDuration="3.646309742s" podCreationTimestamp="2026-02-01 14:37:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:37:44.643537634 +0000 UTC m=+1006.163903938" watchObservedRunningTime="2026-02-01 14:37:44.646309742 +0000 UTC m=+1006.166676016" Feb 01 14:37:45 crc kubenswrapper[4820]: I0201 14:37:45.207650 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cef020d-e1b2-4b04-8576-4dc7efe580fd" path="/var/lib/kubelet/pods/9cef020d-e1b2-4b04-8576-4dc7efe580fd/volumes" Feb 01 14:37:46 crc kubenswrapper[4820]: I0201 14:37:46.644613 4820 generic.go:334] "Generic (PLEG): container finished" podID="a04b1df7-0713-46cd-a625-0a4b8bec72ae" containerID="62a7b3084fc031919202118c8db803cf0f9ff03d330186cce16c1d6cbc815a9e" exitCode=0 Feb 01 14:37:46 crc kubenswrapper[4820]: I0201 14:37:46.644856 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rwpvb" event={"ID":"a04b1df7-0713-46cd-a625-0a4b8bec72ae","Type":"ContainerDied","Data":"62a7b3084fc031919202118c8db803cf0f9ff03d330186cce16c1d6cbc815a9e"} Feb 01 14:37:52 crc kubenswrapper[4820]: I0201 14:37:52.005421 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" Feb 01 14:37:52 crc kubenswrapper[4820]: I0201 14:37:52.072994 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-zqlvd"] Feb 01 14:37:52 crc kubenswrapper[4820]: I0201 14:37:52.073493 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" podUID="6cf74a95-6bc8-41b1-afb0-662e3a1c71b0" containerName="dnsmasq-dns" containerID="cri-o://c124a27f1689873448dd4e87ad65b2d239e480812a13b18581e29590db2d5e75" gracePeriod=10 Feb 01 14:37:52 crc kubenswrapper[4820]: I0201 14:37:52.712659 4820 generic.go:334] "Generic 
(PLEG): container finished" podID="6cf74a95-6bc8-41b1-afb0-662e3a1c71b0" containerID="c124a27f1689873448dd4e87ad65b2d239e480812a13b18581e29590db2d5e75" exitCode=0 Feb 01 14:37:52 crc kubenswrapper[4820]: I0201 14:37:52.712717 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" event={"ID":"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0","Type":"ContainerDied","Data":"c124a27f1689873448dd4e87ad65b2d239e480812a13b18581e29590db2d5e75"} Feb 01 14:37:55 crc kubenswrapper[4820]: I0201 14:37:55.957736 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" podUID="6cf74a95-6bc8-41b1-afb0-662e3a1c71b0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: connect: connection refused" Feb 01 14:38:00 crc kubenswrapper[4820]: I0201 14:38:00.958198 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" podUID="6cf74a95-6bc8-41b1-afb0-662e3a1c71b0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: connect: connection refused" Feb 01 14:38:01 crc kubenswrapper[4820]: I0201 14:38:01.188652 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rwpvb" Feb 01 14:38:01 crc kubenswrapper[4820]: I0201 14:38:01.413353 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-config-data\") pod \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\" (UID: \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\") " Feb 01 14:38:01 crc kubenswrapper[4820]: I0201 14:38:01.413438 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-fernet-keys\") pod \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\" (UID: \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\") " Feb 01 14:38:01 crc kubenswrapper[4820]: I0201 14:38:01.413527 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-credential-keys\") pod \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\" (UID: \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\") " Feb 01 14:38:01 crc kubenswrapper[4820]: I0201 14:38:01.413554 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68z7g\" (UniqueName: \"kubernetes.io/projected/a04b1df7-0713-46cd-a625-0a4b8bec72ae-kube-api-access-68z7g\") pod \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\" (UID: \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\") " Feb 01 14:38:01 crc kubenswrapper[4820]: I0201 14:38:01.413580 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-combined-ca-bundle\") pod \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\" (UID: \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\") " Feb 01 14:38:01 crc kubenswrapper[4820]: I0201 14:38:01.413651 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-scripts\") pod \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\" (UID: \"a04b1df7-0713-46cd-a625-0a4b8bec72ae\") " Feb 01 14:38:01 crc kubenswrapper[4820]: I0201 14:38:01.424046 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a04b1df7-0713-46cd-a625-0a4b8bec72ae" (UID: "a04b1df7-0713-46cd-a625-0a4b8bec72ae"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:01 crc kubenswrapper[4820]: I0201 14:38:01.432098 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a04b1df7-0713-46cd-a625-0a4b8bec72ae-kube-api-access-68z7g" (OuterVolumeSpecName: "kube-api-access-68z7g") pod "a04b1df7-0713-46cd-a625-0a4b8bec72ae" (UID: "a04b1df7-0713-46cd-a625-0a4b8bec72ae"). InnerVolumeSpecName "kube-api-access-68z7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:38:01 crc kubenswrapper[4820]: I0201 14:38:01.434063 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a04b1df7-0713-46cd-a625-0a4b8bec72ae" (UID: "a04b1df7-0713-46cd-a625-0a4b8bec72ae"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:01 crc kubenswrapper[4820]: I0201 14:38:01.435978 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-scripts" (OuterVolumeSpecName: "scripts") pod "a04b1df7-0713-46cd-a625-0a4b8bec72ae" (UID: "a04b1df7-0713-46cd-a625-0a4b8bec72ae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:01 crc kubenswrapper[4820]: I0201 14:38:01.458205 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-config-data" (OuterVolumeSpecName: "config-data") pod "a04b1df7-0713-46cd-a625-0a4b8bec72ae" (UID: "a04b1df7-0713-46cd-a625-0a4b8bec72ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:01 crc kubenswrapper[4820]: I0201 14:38:01.505047 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a04b1df7-0713-46cd-a625-0a4b8bec72ae" (UID: "a04b1df7-0713-46cd-a625-0a4b8bec72ae"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:01 crc kubenswrapper[4820]: I0201 14:38:01.515525 4820 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:01 crc kubenswrapper[4820]: I0201 14:38:01.515568 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68z7g\" (UniqueName: \"kubernetes.io/projected/a04b1df7-0713-46cd-a625-0a4b8bec72ae-kube-api-access-68z7g\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:01 crc kubenswrapper[4820]: I0201 14:38:01.515583 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:01 crc kubenswrapper[4820]: I0201 14:38:01.515593 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:01 crc kubenswrapper[4820]: I0201 14:38:01.515604 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:01 crc kubenswrapper[4820]: I0201 14:38:01.515614 4820 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a04b1df7-0713-46cd-a625-0a4b8bec72ae-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:01 crc kubenswrapper[4820]: I0201 14:38:01.791101 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rwpvb" event={"ID":"a04b1df7-0713-46cd-a625-0a4b8bec72ae","Type":"ContainerDied","Data":"063f6f63e9b404cc1401804b1508475ce2a7aa92ed3619ef588d34c46b6887e5"} Feb 01 14:38:01 crc kubenswrapper[4820]: I0201 14:38:01.791155 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="063f6f63e9b404cc1401804b1508475ce2a7aa92ed3619ef588d34c46b6887e5" Feb 01 14:38:01 crc kubenswrapper[4820]: I0201 14:38:01.791217 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rwpvb" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.259559 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-rwpvb"] Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.265285 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-rwpvb"] Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.363511 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vd4hp"] Feb 01 14:38:02 crc kubenswrapper[4820]: E0201 14:38:02.363960 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cef020d-e1b2-4b04-8576-4dc7efe580fd" containerName="init" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.363979 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cef020d-e1b2-4b04-8576-4dc7efe580fd" containerName="init" Feb 01 14:38:02 crc kubenswrapper[4820]: E0201 14:38:02.364000 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04b1df7-0713-46cd-a625-0a4b8bec72ae" containerName="keystone-bootstrap" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.364009 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04b1df7-0713-46cd-a625-0a4b8bec72ae" containerName="keystone-bootstrap" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.364212 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a04b1df7-0713-46cd-a625-0a4b8bec72ae" containerName="keystone-bootstrap" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.364244 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cef020d-e1b2-4b04-8576-4dc7efe580fd" containerName="init" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.364914 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vd4hp" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.373425 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vd4hp"] Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.378100 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.378113 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.378145 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9xht9" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.378397 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.383116 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.531712 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-credential-keys\") pod \"keystone-bootstrap-vd4hp\" (UID: \"82e23cb3-1af3-46a1-9805-8bf8579b1991\") " pod="openstack/keystone-bootstrap-vd4hp" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.531776 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-fernet-keys\") pod \"keystone-bootstrap-vd4hp\" (UID: \"82e23cb3-1af3-46a1-9805-8bf8579b1991\") " pod="openstack/keystone-bootstrap-vd4hp" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.531806 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vtr2\" (UniqueName: \"kubernetes.io/projected/82e23cb3-1af3-46a1-9805-8bf8579b1991-kube-api-access-7vtr2\") pod \"keystone-bootstrap-vd4hp\" (UID: \"82e23cb3-1af3-46a1-9805-8bf8579b1991\") " pod="openstack/keystone-bootstrap-vd4hp" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.531836 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-combined-ca-bundle\") pod \"keystone-bootstrap-vd4hp\" (UID: \"82e23cb3-1af3-46a1-9805-8bf8579b1991\") " pod="openstack/keystone-bootstrap-vd4hp" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.531889 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-config-data\") pod \"keystone-bootstrap-vd4hp\" (UID: \"82e23cb3-1af3-46a1-9805-8bf8579b1991\") " pod="openstack/keystone-bootstrap-vd4hp" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.531906 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-scripts\") pod \"keystone-bootstrap-vd4hp\" (UID: \"82e23cb3-1af3-46a1-9805-8bf8579b1991\") " pod="openstack/keystone-bootstrap-vd4hp" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.633049 4820 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-credential-keys\") pod \"keystone-bootstrap-vd4hp\" (UID: \"82e23cb3-1af3-46a1-9805-8bf8579b1991\") " pod="openstack/keystone-bootstrap-vd4hp" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.633442 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-fernet-keys\") pod \"keystone-bootstrap-vd4hp\" (UID: \"82e23cb3-1af3-46a1-9805-8bf8579b1991\") " pod="openstack/keystone-bootstrap-vd4hp" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.633485 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vtr2\" (UniqueName: \"kubernetes.io/projected/82e23cb3-1af3-46a1-9805-8bf8579b1991-kube-api-access-7vtr2\") pod \"keystone-bootstrap-vd4hp\" (UID: \"82e23cb3-1af3-46a1-9805-8bf8579b1991\") " pod="openstack/keystone-bootstrap-vd4hp" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.633511 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-combined-ca-bundle\") pod \"keystone-bootstrap-vd4hp\" (UID: \"82e23cb3-1af3-46a1-9805-8bf8579b1991\") " pod="openstack/keystone-bootstrap-vd4hp" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.633557 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-config-data\") pod \"keystone-bootstrap-vd4hp\" (UID: \"82e23cb3-1af3-46a1-9805-8bf8579b1991\") " pod="openstack/keystone-bootstrap-vd4hp" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.633577 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-scripts\") pod \"keystone-bootstrap-vd4hp\" (UID: \"82e23cb3-1af3-46a1-9805-8bf8579b1991\") " pod="openstack/keystone-bootstrap-vd4hp" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.637445 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-credential-keys\") pod \"keystone-bootstrap-vd4hp\" (UID: \"82e23cb3-1af3-46a1-9805-8bf8579b1991\") " pod="openstack/keystone-bootstrap-vd4hp" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.638285 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-combined-ca-bundle\") pod \"keystone-bootstrap-vd4hp\" (UID: \"82e23cb3-1af3-46a1-9805-8bf8579b1991\") " pod="openstack/keystone-bootstrap-vd4hp" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.638464 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-config-data\") pod \"keystone-bootstrap-vd4hp\" (UID: \"82e23cb3-1af3-46a1-9805-8bf8579b1991\") " pod="openstack/keystone-bootstrap-vd4hp" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.638593 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-fernet-keys\") pod \"keystone-bootstrap-vd4hp\" (UID: 
\"82e23cb3-1af3-46a1-9805-8bf8579b1991\") " pod="openstack/keystone-bootstrap-vd4hp" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.638884 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-scripts\") pod \"keystone-bootstrap-vd4hp\" (UID: \"82e23cb3-1af3-46a1-9805-8bf8579b1991\") " pod="openstack/keystone-bootstrap-vd4hp" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.653993 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vtr2\" (UniqueName: \"kubernetes.io/projected/82e23cb3-1af3-46a1-9805-8bf8579b1991-kube-api-access-7vtr2\") pod \"keystone-bootstrap-vd4hp\" (UID: \"82e23cb3-1af3-46a1-9805-8bf8579b1991\") " pod="openstack/keystone-bootstrap-vd4hp" Feb 01 14:38:02 crc kubenswrapper[4820]: I0201 14:38:02.691921 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vd4hp" Feb 01 14:38:03 crc kubenswrapper[4820]: I0201 14:38:03.209197 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a04b1df7-0713-46cd-a625-0a4b8bec72ae" path="/var/lib/kubelet/pods/a04b1df7-0713-46cd-a625-0a4b8bec72ae/volumes" Feb 01 14:38:04 crc kubenswrapper[4820]: E0201 14:38:04.264203 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 01 14:38:04 crc kubenswrapper[4820]: E0201 14:38:04.264357 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jg8bz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-wpqvp_openstack(857bc684-4d17-461f-9183-6c0a7ac89845): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 01 14:38:04 crc kubenswrapper[4820]: E0201 14:38:04.265548 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-wpqvp" podUID="857bc684-4d17-461f-9183-6c0a7ac89845" Feb 01 14:38:04 crc kubenswrapper[4820]: E0201 14:38:04.545436 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 01 14:38:04 crc kubenswrapper[4820]: E0201 14:38:04.545642 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfbh678hc5hb7h696h586hbfh566h587h7h66h54hb4h567hd6h568h65fh689h8h5f8h6h699h9ch547h544h556hb9h65ch5d6h68ch5fchcq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qspv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(dcf7fd8f-91f1-4742-bcdb-351da5ded25a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 01 14:38:04 crc kubenswrapper[4820]: I0201 14:38:04.802923 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" Feb 01 14:38:04 crc kubenswrapper[4820]: I0201 14:38:04.811908 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" Feb 01 14:38:04 crc kubenswrapper[4820]: I0201 14:38:04.811909 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-zqlvd" event={"ID":"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0","Type":"ContainerDied","Data":"e1c37c7ba35230d86642f09278104a89363c5ad086926688b2fcfb8a22905e48"} Feb 01 14:38:04 crc kubenswrapper[4820]: I0201 14:38:04.812223 4820 scope.go:117] "RemoveContainer" containerID="c124a27f1689873448dd4e87ad65b2d239e480812a13b18581e29590db2d5e75" Feb 01 14:38:04 crc kubenswrapper[4820]: E0201 14:38:04.813576 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-wpqvp" podUID="857bc684-4d17-461f-9183-6c0a7ac89845" Feb 01 14:38:04 crc kubenswrapper[4820]: I0201 14:38:04.836362 4820 scope.go:117] "RemoveContainer" containerID="2ad12fc0e79ea051659042da81c0b014dfba2c0715ad84ce42a9eb0d8612dab2" Feb 01 14:38:04 crc kubenswrapper[4820]: I0201 14:38:04.971342 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vd4hp"] Feb 01 14:38:04 crc kubenswrapper[4820]: I0201 14:38:04.973290 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59q7x\" (UniqueName: \"kubernetes.io/projected/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-kube-api-access-59q7x\") pod \"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0\" (UID: \"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0\") " Feb 01 14:38:04 crc kubenswrapper[4820]: I0201 14:38:04.973436 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-config\") pod \"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0\" (UID: \"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0\") " Feb 01 14:38:04 crc kubenswrapper[4820]: I0201 14:38:04.973497 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-dns-svc\") pod \"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0\" (UID: \"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0\") " Feb 01 14:38:04 crc kubenswrapper[4820]: I0201 14:38:04.973557 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-ovsdbserver-sb\") pod \"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0\" (UID: \"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0\") " Feb 01 14:38:04 crc kubenswrapper[4820]: I0201 14:38:04.973617 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-ovsdbserver-nb\") pod \"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0\" (UID: \"6cf74a95-6bc8-41b1-afb0-662e3a1c71b0\") " Feb 01 14:38:04 crc kubenswrapper[4820]: I0201 14:38:04.979748 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-kube-api-access-59q7x" (OuterVolumeSpecName: "kube-api-access-59q7x") pod "6cf74a95-6bc8-41b1-afb0-662e3a1c71b0" (UID: "6cf74a95-6bc8-41b1-afb0-662e3a1c71b0"). InnerVolumeSpecName "kube-api-access-59q7x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:38:05 crc kubenswrapper[4820]: I0201 14:38:05.016331 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6cf74a95-6bc8-41b1-afb0-662e3a1c71b0" (UID: "6cf74a95-6bc8-41b1-afb0-662e3a1c71b0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:38:05 crc kubenswrapper[4820]: I0201 14:38:05.021766 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6cf74a95-6bc8-41b1-afb0-662e3a1c71b0" (UID: "6cf74a95-6bc8-41b1-afb0-662e3a1c71b0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:38:05 crc kubenswrapper[4820]: I0201 14:38:05.027916 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6cf74a95-6bc8-41b1-afb0-662e3a1c71b0" (UID: "6cf74a95-6bc8-41b1-afb0-662e3a1c71b0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:38:05 crc kubenswrapper[4820]: I0201 14:38:05.031048 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-config" (OuterVolumeSpecName: "config") pod "6cf74a95-6bc8-41b1-afb0-662e3a1c71b0" (UID: "6cf74a95-6bc8-41b1-afb0-662e3a1c71b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:38:05 crc kubenswrapper[4820]: I0201 14:38:05.075209 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:05 crc kubenswrapper[4820]: I0201 14:38:05.075287 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:05 crc kubenswrapper[4820]: I0201 14:38:05.075298 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:05 crc kubenswrapper[4820]: I0201 14:38:05.075309 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:05 crc kubenswrapper[4820]: I0201 14:38:05.075318 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59q7x\" (UniqueName: \"kubernetes.io/projected/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0-kube-api-access-59q7x\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:05 crc kubenswrapper[4820]: I0201 14:38:05.145863 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-zqlvd"] Feb 01 14:38:05 crc kubenswrapper[4820]: I0201 14:38:05.154044 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-zqlvd"] Feb 01 14:38:05 crc kubenswrapper[4820]: I0201 14:38:05.209721 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="6cf74a95-6bc8-41b1-afb0-662e3a1c71b0" path="/var/lib/kubelet/pods/6cf74a95-6bc8-41b1-afb0-662e3a1c71b0/volumes" Feb 01 14:38:05 crc kubenswrapper[4820]: I0201 14:38:05.826710 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vd4hp" event={"ID":"82e23cb3-1af3-46a1-9805-8bf8579b1991","Type":"ContainerStarted","Data":"24576896bb05561754204963db5f31ce0a8396beffd0275c37125980d58b990b"} Feb 01 14:38:05 crc kubenswrapper[4820]: I0201 14:38:05.827038 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vd4hp" event={"ID":"82e23cb3-1af3-46a1-9805-8bf8579b1991","Type":"ContainerStarted","Data":"ba34fa0d3e4061bf171dd381d98389357805191dbe2297528a208b2794306198"} Feb 01 14:38:05 crc kubenswrapper[4820]: I0201 14:38:05.829069 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zr8wv" event={"ID":"8596fa26-8ba1-4348-8493-3df37f0cfcaa","Type":"ContainerStarted","Data":"a7d2a73a3aa9473404e974cca34cc6c012995d48099f1f8616bc193a93e55d6b"} Feb 01 14:38:05 crc kubenswrapper[4820]: I0201 14:38:05.831992 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mg8xp" event={"ID":"1cb06733-7a89-4153-a16b-e69317c5f8a3","Type":"ContainerStarted","Data":"540491a79e332edc73bbedc80aaf79b4a2df22da101d1884137685b5b7ff83a9"} Feb 01 14:38:05 crc kubenswrapper[4820]: I0201 14:38:05.861654 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vd4hp" podStartSLOduration=3.861625224 podStartE2EDuration="3.861625224s" podCreationTimestamp="2026-02-01 14:38:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:38:05.843098063 +0000 UTC m=+1027.363464377" watchObservedRunningTime="2026-02-01 14:38:05.861625224 +0000 UTC m=+1027.381991528" Feb 01 14:38:05 crc kubenswrapper[4820]: I0201 14:38:05.868040 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-zr8wv" podStartSLOduration=2.646157134 podStartE2EDuration="24.86801705s" podCreationTimestamp="2026-02-01 14:37:41 +0000 UTC" firstStartedPulling="2026-02-01 14:37:42.286751048 +0000 UTC m=+1003.807117332" lastFinishedPulling="2026-02-01 14:38:04.508610964 +0000 UTC m=+1026.028977248" observedRunningTime="2026-02-01 14:38:05.855146436 +0000 UTC m=+1027.375512740" watchObservedRunningTime="2026-02-01 14:38:05.86801705 +0000 UTC m=+1027.388383344" Feb 01 14:38:05 crc kubenswrapper[4820]: I0201 14:38:05.880807 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-mg8xp" podStartSLOduration=2.835777592 podStartE2EDuration="24.880785781s" podCreationTimestamp="2026-02-01 14:37:41 +0000 UTC" firstStartedPulling="2026-02-01 14:37:42.479069751 +0000 UTC m=+1003.999436025" lastFinishedPulling="2026-02-01 14:38:04.52407793 +0000 UTC m=+1026.044444214" observedRunningTime="2026-02-01 14:38:05.8712824 +0000 UTC m=+1027.391648694" watchObservedRunningTime="2026-02-01 14:38:05.880785781 +0000 UTC m=+1027.401152065" Feb 01 14:38:06 crc kubenswrapper[4820]: I0201 14:38:06.843174 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcf7fd8f-91f1-4742-bcdb-351da5ded25a","Type":"ContainerStarted","Data":"99b5bbb4f8a5a494992d1807c34ffd7c227efa7601613ac7f4adbaabad0f1718"} Feb 01 14:38:09 crc kubenswrapper[4820]: I0201 14:38:09.891292 4820 
generic.go:334] "Generic (PLEG): container finished" podID="1cb06733-7a89-4153-a16b-e69317c5f8a3" containerID="540491a79e332edc73bbedc80aaf79b4a2df22da101d1884137685b5b7ff83a9" exitCode=0 Feb 01 14:38:09 crc kubenswrapper[4820]: I0201 14:38:09.891385 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mg8xp" event={"ID":"1cb06733-7a89-4153-a16b-e69317c5f8a3","Type":"ContainerDied","Data":"540491a79e332edc73bbedc80aaf79b4a2df22da101d1884137685b5b7ff83a9"} Feb 01 14:38:09 crc kubenswrapper[4820]: I0201 14:38:09.893150 4820 generic.go:334] "Generic (PLEG): container finished" podID="82e23cb3-1af3-46a1-9805-8bf8579b1991" containerID="24576896bb05561754204963db5f31ce0a8396beffd0275c37125980d58b990b" exitCode=0 Feb 01 14:38:09 crc kubenswrapper[4820]: I0201 14:38:09.893179 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vd4hp" event={"ID":"82e23cb3-1af3-46a1-9805-8bf8579b1991","Type":"ContainerDied","Data":"24576896bb05561754204963db5f31ce0a8396beffd0275c37125980d58b990b"} Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.276085 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mg8xp" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.285650 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vd4hp" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.376237 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-fernet-keys\") pod \"82e23cb3-1af3-46a1-9805-8bf8579b1991\" (UID: \"82e23cb3-1af3-46a1-9805-8bf8579b1991\") " Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.376295 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cb06733-7a89-4153-a16b-e69317c5f8a3-config-data\") pod \"1cb06733-7a89-4153-a16b-e69317c5f8a3\" (UID: \"1cb06733-7a89-4153-a16b-e69317c5f8a3\") " Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.376497 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb06733-7a89-4153-a16b-e69317c5f8a3-combined-ca-bundle\") pod \"1cb06733-7a89-4153-a16b-e69317c5f8a3\" (UID: \"1cb06733-7a89-4153-a16b-e69317c5f8a3\") " Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.376542 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cb06733-7a89-4153-a16b-e69317c5f8a3-scripts\") pod \"1cb06733-7a89-4153-a16b-e69317c5f8a3\" (UID: \"1cb06733-7a89-4153-a16b-e69317c5f8a3\") " Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.376566 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4hkh\" (UniqueName: \"kubernetes.io/projected/1cb06733-7a89-4153-a16b-e69317c5f8a3-kube-api-access-g4hkh\") pod \"1cb06733-7a89-4153-a16b-e69317c5f8a3\" (UID: \"1cb06733-7a89-4153-a16b-e69317c5f8a3\") " Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.376599 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vtr2\" (UniqueName: \"kubernetes.io/projected/82e23cb3-1af3-46a1-9805-8bf8579b1991-kube-api-access-7vtr2\") pod \"82e23cb3-1af3-46a1-9805-8bf8579b1991\" (UID: \"82e23cb3-1af3-46a1-9805-8bf8579b1991\") " 
Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.376618 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cb06733-7a89-4153-a16b-e69317c5f8a3-logs\") pod \"1cb06733-7a89-4153-a16b-e69317c5f8a3\" (UID: \"1cb06733-7a89-4153-a16b-e69317c5f8a3\") " Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.376643 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-combined-ca-bundle\") pod \"82e23cb3-1af3-46a1-9805-8bf8579b1991\" (UID: \"82e23cb3-1af3-46a1-9805-8bf8579b1991\") " Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.376666 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-credential-keys\") pod \"82e23cb3-1af3-46a1-9805-8bf8579b1991\" (UID: \"82e23cb3-1af3-46a1-9805-8bf8579b1991\") " Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.376695 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-config-data\") pod \"82e23cb3-1af3-46a1-9805-8bf8579b1991\" (UID: \"82e23cb3-1af3-46a1-9805-8bf8579b1991\") " Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.376758 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-scripts\") pod \"82e23cb3-1af3-46a1-9805-8bf8579b1991\" (UID: \"82e23cb3-1af3-46a1-9805-8bf8579b1991\") " Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.377495 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cb06733-7a89-4153-a16b-e69317c5f8a3-logs" (OuterVolumeSpecName: "logs") pod "1cb06733-7a89-4153-a16b-e69317c5f8a3" (UID: "1cb06733-7a89-4153-a16b-e69317c5f8a3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.378047 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cb06733-7a89-4153-a16b-e69317c5f8a3-logs\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.383273 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "82e23cb3-1af3-46a1-9805-8bf8579b1991" (UID: "82e23cb3-1af3-46a1-9805-8bf8579b1991"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.383398 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cb06733-7a89-4153-a16b-e69317c5f8a3-scripts" (OuterVolumeSpecName: "scripts") pod "1cb06733-7a89-4153-a16b-e69317c5f8a3" (UID: "1cb06733-7a89-4153-a16b-e69317c5f8a3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.383582 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cb06733-7a89-4153-a16b-e69317c5f8a3-kube-api-access-g4hkh" (OuterVolumeSpecName: "kube-api-access-g4hkh") pod "1cb06733-7a89-4153-a16b-e69317c5f8a3" (UID: "1cb06733-7a89-4153-a16b-e69317c5f8a3"). InnerVolumeSpecName "kube-api-access-g4hkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.384498 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "82e23cb3-1af3-46a1-9805-8bf8579b1991" (UID: "82e23cb3-1af3-46a1-9805-8bf8579b1991"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.384938 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82e23cb3-1af3-46a1-9805-8bf8579b1991-kube-api-access-7vtr2" (OuterVolumeSpecName: "kube-api-access-7vtr2") pod "82e23cb3-1af3-46a1-9805-8bf8579b1991" (UID: "82e23cb3-1af3-46a1-9805-8bf8579b1991"). InnerVolumeSpecName "kube-api-access-7vtr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.386015 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-scripts" (OuterVolumeSpecName: "scripts") pod "82e23cb3-1af3-46a1-9805-8bf8579b1991" (UID: "82e23cb3-1af3-46a1-9805-8bf8579b1991"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.401495 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cb06733-7a89-4153-a16b-e69317c5f8a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1cb06733-7a89-4153-a16b-e69317c5f8a3" (UID: "1cb06733-7a89-4153-a16b-e69317c5f8a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.404752 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cb06733-7a89-4153-a16b-e69317c5f8a3-config-data" (OuterVolumeSpecName: "config-data") pod "1cb06733-7a89-4153-a16b-e69317c5f8a3" (UID: "1cb06733-7a89-4153-a16b-e69317c5f8a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.404788 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-config-data" (OuterVolumeSpecName: "config-data") pod "82e23cb3-1af3-46a1-9805-8bf8579b1991" (UID: "82e23cb3-1af3-46a1-9805-8bf8579b1991"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.406412 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82e23cb3-1af3-46a1-9805-8bf8579b1991" (UID: "82e23cb3-1af3-46a1-9805-8bf8579b1991"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.479730 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cb06733-7a89-4153-a16b-e69317c5f8a3-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.479762 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb06733-7a89-4153-a16b-e69317c5f8a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.479774 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cb06733-7a89-4153-a16b-e69317c5f8a3-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.479784 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4hkh\" (UniqueName: \"kubernetes.io/projected/1cb06733-7a89-4153-a16b-e69317c5f8a3-kube-api-access-g4hkh\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.479807 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vtr2\" (UniqueName: \"kubernetes.io/projected/82e23cb3-1af3-46a1-9805-8bf8579b1991-kube-api-access-7vtr2\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.479816 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.479823 4820 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.479832 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.479839 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.479847 4820 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/82e23cb3-1af3-46a1-9805-8bf8579b1991-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.906721 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vd4hp" event={"ID":"82e23cb3-1af3-46a1-9805-8bf8579b1991","Type":"ContainerDied","Data":"ba34fa0d3e4061bf171dd381d98389357805191dbe2297528a208b2794306198"} Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.906761 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba34fa0d3e4061bf171dd381d98389357805191dbe2297528a208b2794306198" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.906803 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vd4hp" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.909034 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcf7fd8f-91f1-4742-bcdb-351da5ded25a","Type":"ContainerStarted","Data":"197e5a0c9cf004db3a4026c36aef07736111e6bebdae330ca478b195399c935e"} Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.910990 4820 generic.go:334] "Generic (PLEG): container finished" podID="8596fa26-8ba1-4348-8493-3df37f0cfcaa" containerID="a7d2a73a3aa9473404e974cca34cc6c012995d48099f1f8616bc193a93e55d6b" exitCode=0 Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.911076 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zr8wv" event={"ID":"8596fa26-8ba1-4348-8493-3df37f0cfcaa","Type":"ContainerDied","Data":"a7d2a73a3aa9473404e974cca34cc6c012995d48099f1f8616bc193a93e55d6b"} Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.913960 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mg8xp" event={"ID":"1cb06733-7a89-4153-a16b-e69317c5f8a3","Type":"ContainerDied","Data":"dfe0a53388f6bbab268da9d2dd08d3c6358fcac2f97973839d9f6c1ba73b2451"} Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.914007 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfe0a53388f6bbab268da9d2dd08d3c6358fcac2f97973839d9f6c1ba73b2451" Feb 01 14:38:11 crc kubenswrapper[4820]: I0201 14:38:11.914094 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mg8xp" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.036152 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7746bdf84d-qdnfk"] Feb 01 14:38:12 crc kubenswrapper[4820]: E0201 14:38:12.036545 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb06733-7a89-4153-a16b-e69317c5f8a3" containerName="placement-db-sync" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.036569 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb06733-7a89-4153-a16b-e69317c5f8a3" containerName="placement-db-sync" Feb 01 14:38:12 crc kubenswrapper[4820]: E0201 14:38:12.036584 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e23cb3-1af3-46a1-9805-8bf8579b1991" containerName="keystone-bootstrap" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.036592 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e23cb3-1af3-46a1-9805-8bf8579b1991" containerName="keystone-bootstrap" Feb 01 14:38:12 crc kubenswrapper[4820]: E0201 14:38:12.036619 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cf74a95-6bc8-41b1-afb0-662e3a1c71b0" containerName="init" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.036627 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cf74a95-6bc8-41b1-afb0-662e3a1c71b0" containerName="init" Feb 01 14:38:12 crc kubenswrapper[4820]: E0201 14:38:12.036640 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cf74a95-6bc8-41b1-afb0-662e3a1c71b0" containerName="dnsmasq-dns" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.036650 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cf74a95-6bc8-41b1-afb0-662e3a1c71b0" containerName="dnsmasq-dns" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.036839 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e23cb3-1af3-46a1-9805-8bf8579b1991" 
containerName="keystone-bootstrap" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.036853 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cf74a95-6bc8-41b1-afb0-662e3a1c71b0" containerName="dnsmasq-dns" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.036868 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb06733-7a89-4153-a16b-e69317c5f8a3" containerName="placement-db-sync" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.041214 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.043855 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.046425 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.046587 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-5crjx" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.046731 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.046935 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.047829 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7746bdf84d-qdnfk"] Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.088381 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-config-data\") pod \"placement-7746bdf84d-qdnfk\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") " pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.088477 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/337df6da-71eb-4976-993f-b5c45e6ecdcc-logs\") pod \"placement-7746bdf84d-qdnfk\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") " pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.088513 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-scripts\") pod \"placement-7746bdf84d-qdnfk\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") " pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.088635 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-internal-tls-certs\") pod \"placement-7746bdf84d-qdnfk\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") " pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.088668 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-combined-ca-bundle\") pod \"placement-7746bdf84d-qdnfk\" (UID: 
\"337df6da-71eb-4976-993f-b5c45e6ecdcc\") " pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.088912 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df2ck\" (UniqueName: \"kubernetes.io/projected/337df6da-71eb-4976-993f-b5c45e6ecdcc-kube-api-access-df2ck\") pod \"placement-7746bdf84d-qdnfk\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") " pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.089002 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-public-tls-certs\") pod \"placement-7746bdf84d-qdnfk\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") " pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.098563 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7bb96c9945-brtg8"] Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.100269 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.106214 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.106512 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.106610 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9xht9" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.106663 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.106886 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.106910 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.115822 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7bb96c9945-brtg8"] Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.190581 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42aa9479-729d-4c7d-a048-8db0d4434679-combined-ca-bundle\") pod \"keystone-7bb96c9945-brtg8\" (UID: \"42aa9479-729d-4c7d-a048-8db0d4434679\") " pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.190644 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-public-tls-certs\") pod \"placement-7746bdf84d-qdnfk\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") " pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.190768 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-config-data\") pod \"placement-7746bdf84d-qdnfk\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") " 
pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.190838 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/337df6da-71eb-4976-993f-b5c45e6ecdcc-logs\") pod \"placement-7746bdf84d-qdnfk\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") " pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.190915 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-scripts\") pod \"placement-7746bdf84d-qdnfk\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") " pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.190951 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42aa9479-729d-4c7d-a048-8db0d4434679-scripts\") pod \"keystone-7bb96c9945-brtg8\" (UID: \"42aa9479-729d-4c7d-a048-8db0d4434679\") " pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.191011 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42aa9479-729d-4c7d-a048-8db0d4434679-fernet-keys\") pod \"keystone-7bb96c9945-brtg8\" (UID: \"42aa9479-729d-4c7d-a048-8db0d4434679\") " pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.191055 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc8cg\" (UniqueName: \"kubernetes.io/projected/42aa9479-729d-4c7d-a048-8db0d4434679-kube-api-access-wc8cg\") pod \"keystone-7bb96c9945-brtg8\" (UID: \"42aa9479-729d-4c7d-a048-8db0d4434679\") " pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.191143 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-internal-tls-certs\") pod \"placement-7746bdf84d-qdnfk\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") " pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.191167 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/42aa9479-729d-4c7d-a048-8db0d4434679-credential-keys\") pod \"keystone-7bb96c9945-brtg8\" (UID: \"42aa9479-729d-4c7d-a048-8db0d4434679\") " pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.191225 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-combined-ca-bundle\") pod \"placement-7746bdf84d-qdnfk\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") " pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.191300 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42aa9479-729d-4c7d-a048-8db0d4434679-config-data\") pod \"keystone-7bb96c9945-brtg8\" (UID: \"42aa9479-729d-4c7d-a048-8db0d4434679\") " pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 
14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.191415 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/337df6da-71eb-4976-993f-b5c45e6ecdcc-logs\") pod \"placement-7746bdf84d-qdnfk\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") " pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.191424 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42aa9479-729d-4c7d-a048-8db0d4434679-internal-tls-certs\") pod \"keystone-7bb96c9945-brtg8\" (UID: \"42aa9479-729d-4c7d-a048-8db0d4434679\") " pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.191658 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42aa9479-729d-4c7d-a048-8db0d4434679-public-tls-certs\") pod \"keystone-7bb96c9945-brtg8\" (UID: \"42aa9479-729d-4c7d-a048-8db0d4434679\") " pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.191804 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df2ck\" (UniqueName: \"kubernetes.io/projected/337df6da-71eb-4976-993f-b5c45e6ecdcc-kube-api-access-df2ck\") pod \"placement-7746bdf84d-qdnfk\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") " pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.194822 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-internal-tls-certs\") pod \"placement-7746bdf84d-qdnfk\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") " pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.195107 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-scripts\") pod \"placement-7746bdf84d-qdnfk\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") " pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.195124 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-config-data\") pod \"placement-7746bdf84d-qdnfk\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") " pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.195303 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-public-tls-certs\") pod \"placement-7746bdf84d-qdnfk\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") " pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.207528 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-combined-ca-bundle\") pod \"placement-7746bdf84d-qdnfk\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") " pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.210223 4820 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-df2ck\" (UniqueName: \"kubernetes.io/projected/337df6da-71eb-4976-993f-b5c45e6ecdcc-kube-api-access-df2ck\") pod \"placement-7746bdf84d-qdnfk\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") " pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.293323 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42aa9479-729d-4c7d-a048-8db0d4434679-config-data\") pod \"keystone-7bb96c9945-brtg8\" (UID: \"42aa9479-729d-4c7d-a048-8db0d4434679\") " pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.293721 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42aa9479-729d-4c7d-a048-8db0d4434679-internal-tls-certs\") pod \"keystone-7bb96c9945-brtg8\" (UID: \"42aa9479-729d-4c7d-a048-8db0d4434679\") " pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.293748 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42aa9479-729d-4c7d-a048-8db0d4434679-public-tls-certs\") pod \"keystone-7bb96c9945-brtg8\" (UID: \"42aa9479-729d-4c7d-a048-8db0d4434679\") " pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.293839 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42aa9479-729d-4c7d-a048-8db0d4434679-combined-ca-bundle\") pod \"keystone-7bb96c9945-brtg8\" (UID: \"42aa9479-729d-4c7d-a048-8db0d4434679\") " pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.293917 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42aa9479-729d-4c7d-a048-8db0d4434679-scripts\") pod \"keystone-7bb96c9945-brtg8\" (UID: \"42aa9479-729d-4c7d-a048-8db0d4434679\") " pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.293951 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42aa9479-729d-4c7d-a048-8db0d4434679-fernet-keys\") pod \"keystone-7bb96c9945-brtg8\" (UID: \"42aa9479-729d-4c7d-a048-8db0d4434679\") " pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.293971 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc8cg\" (UniqueName: \"kubernetes.io/projected/42aa9479-729d-4c7d-a048-8db0d4434679-kube-api-access-wc8cg\") pod \"keystone-7bb96c9945-brtg8\" (UID: \"42aa9479-729d-4c7d-a048-8db0d4434679\") " pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.294003 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/42aa9479-729d-4c7d-a048-8db0d4434679-credential-keys\") pod \"keystone-7bb96c9945-brtg8\" (UID: \"42aa9479-729d-4c7d-a048-8db0d4434679\") " pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.299163 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/42aa9479-729d-4c7d-a048-8db0d4434679-combined-ca-bundle\") pod \"keystone-7bb96c9945-brtg8\" (UID: \"42aa9479-729d-4c7d-a048-8db0d4434679\") " pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.299331 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42aa9479-729d-4c7d-a048-8db0d4434679-scripts\") pod \"keystone-7bb96c9945-brtg8\" (UID: \"42aa9479-729d-4c7d-a048-8db0d4434679\") " pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.299571 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42aa9479-729d-4c7d-a048-8db0d4434679-config-data\") pod \"keystone-7bb96c9945-brtg8\" (UID: \"42aa9479-729d-4c7d-a048-8db0d4434679\") " pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.299610 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/42aa9479-729d-4c7d-a048-8db0d4434679-credential-keys\") pod \"keystone-7bb96c9945-brtg8\" (UID: \"42aa9479-729d-4c7d-a048-8db0d4434679\") " pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.300183 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42aa9479-729d-4c7d-a048-8db0d4434679-internal-tls-certs\") pod \"keystone-7bb96c9945-brtg8\" (UID: \"42aa9479-729d-4c7d-a048-8db0d4434679\") " pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.301998 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42aa9479-729d-4c7d-a048-8db0d4434679-fernet-keys\") pod \"keystone-7bb96c9945-brtg8\" (UID: \"42aa9479-729d-4c7d-a048-8db0d4434679\") " pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.304632 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42aa9479-729d-4c7d-a048-8db0d4434679-public-tls-certs\") pod \"keystone-7bb96c9945-brtg8\" (UID: \"42aa9479-729d-4c7d-a048-8db0d4434679\") " pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.321515 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc8cg\" (UniqueName: \"kubernetes.io/projected/42aa9479-729d-4c7d-a048-8db0d4434679-kube-api-access-wc8cg\") pod \"keystone-7bb96c9945-brtg8\" (UID: \"42aa9479-729d-4c7d-a048-8db0d4434679\") " pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.359159 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.425634 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.831157 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7746bdf84d-qdnfk"] Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.923012 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7746bdf84d-qdnfk" event={"ID":"337df6da-71eb-4976-993f-b5c45e6ecdcc","Type":"ContainerStarted","Data":"10f93b0fb87a0aa171e63aa72de982898dd0243425bfc9dadc330c3be1c58443"} Feb 01 14:38:12 crc kubenswrapper[4820]: I0201 14:38:12.926433 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7bb96c9945-brtg8"] Feb 01 14:38:13 crc kubenswrapper[4820]: I0201 14:38:13.266753 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zr8wv" Feb 01 14:38:13 crc kubenswrapper[4820]: I0201 14:38:13.411684 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8596fa26-8ba1-4348-8493-3df37f0cfcaa-combined-ca-bundle\") pod \"8596fa26-8ba1-4348-8493-3df37f0cfcaa\" (UID: \"8596fa26-8ba1-4348-8493-3df37f0cfcaa\") " Feb 01 14:38:13 crc kubenswrapper[4820]: I0201 14:38:13.412021 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8596fa26-8ba1-4348-8493-3df37f0cfcaa-db-sync-config-data\") pod \"8596fa26-8ba1-4348-8493-3df37f0cfcaa\" (UID: \"8596fa26-8ba1-4348-8493-3df37f0cfcaa\") " Feb 01 14:38:13 crc kubenswrapper[4820]: I0201 14:38:13.412232 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7w5w\" (UniqueName: \"kubernetes.io/projected/8596fa26-8ba1-4348-8493-3df37f0cfcaa-kube-api-access-n7w5w\") pod \"8596fa26-8ba1-4348-8493-3df37f0cfcaa\" (UID: \"8596fa26-8ba1-4348-8493-3df37f0cfcaa\") " Feb 01 14:38:13 crc kubenswrapper[4820]: I0201 14:38:13.416304 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8596fa26-8ba1-4348-8493-3df37f0cfcaa-kube-api-access-n7w5w" (OuterVolumeSpecName: "kube-api-access-n7w5w") pod "8596fa26-8ba1-4348-8493-3df37f0cfcaa" (UID: "8596fa26-8ba1-4348-8493-3df37f0cfcaa"). InnerVolumeSpecName "kube-api-access-n7w5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:38:13 crc kubenswrapper[4820]: I0201 14:38:13.416443 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8596fa26-8ba1-4348-8493-3df37f0cfcaa-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8596fa26-8ba1-4348-8493-3df37f0cfcaa" (UID: "8596fa26-8ba1-4348-8493-3df37f0cfcaa"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:13 crc kubenswrapper[4820]: I0201 14:38:13.514008 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7w5w\" (UniqueName: \"kubernetes.io/projected/8596fa26-8ba1-4348-8493-3df37f0cfcaa-kube-api-access-n7w5w\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:13 crc kubenswrapper[4820]: I0201 14:38:13.514046 4820 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8596fa26-8ba1-4348-8493-3df37f0cfcaa-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:13 crc kubenswrapper[4820]: I0201 14:38:13.574638 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8596fa26-8ba1-4348-8493-3df37f0cfcaa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8596fa26-8ba1-4348-8493-3df37f0cfcaa" (UID: "8596fa26-8ba1-4348-8493-3df37f0cfcaa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:13 crc kubenswrapper[4820]: I0201 14:38:13.615847 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8596fa26-8ba1-4348-8493-3df37f0cfcaa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:13 crc kubenswrapper[4820]: I0201 14:38:13.932711 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7bb96c9945-brtg8" event={"ID":"42aa9479-729d-4c7d-a048-8db0d4434679","Type":"ContainerStarted","Data":"5895543acc8d6ad5a9a1c5bc41847f6b669022586f5f799003029670abfe8d5d"} Feb 01 14:38:13 crc kubenswrapper[4820]: I0201 14:38:13.932754 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7bb96c9945-brtg8" event={"ID":"42aa9479-729d-4c7d-a048-8db0d4434679","Type":"ContainerStarted","Data":"fa902eb7c3301a61f9221e322273901231f3a4645d289827733260fed9c8e61d"} Feb 01 14:38:13 crc kubenswrapper[4820]: I0201 14:38:13.932830 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:13 crc kubenswrapper[4820]: I0201 14:38:13.934718 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zr8wv" event={"ID":"8596fa26-8ba1-4348-8493-3df37f0cfcaa","Type":"ContainerDied","Data":"db5bdb7a511381742a42692b31e9b77c5674cd30861abf0d071bfa0d3726b61b"} Feb 01 14:38:13 crc kubenswrapper[4820]: I0201 14:38:13.934747 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db5bdb7a511381742a42692b31e9b77c5674cd30861abf0d071bfa0d3726b61b" Feb 01 14:38:13 crc kubenswrapper[4820]: I0201 14:38:13.934733 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-zr8wv" Feb 01 14:38:13 crc kubenswrapper[4820]: I0201 14:38:13.938775 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7746bdf84d-qdnfk" event={"ID":"337df6da-71eb-4976-993f-b5c45e6ecdcc","Type":"ContainerStarted","Data":"5408bee028f3417b93b70d66efdeb58a7c0cbc374a978faeed9ab1b73b44c848"} Feb 01 14:38:13 crc kubenswrapper[4820]: I0201 14:38:13.938830 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7746bdf84d-qdnfk" event={"ID":"337df6da-71eb-4976-993f-b5c45e6ecdcc","Type":"ContainerStarted","Data":"9464f7cf92d57cfe6e206ee66467ccd7976651f763466d5b697503e55d13825d"} Feb 01 14:38:13 crc kubenswrapper[4820]: I0201 14:38:13.938897 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:13 crc kubenswrapper[4820]: I0201 14:38:13.938932 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:13 crc kubenswrapper[4820]: I0201 14:38:13.956538 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7bb96c9945-brtg8" podStartSLOduration=1.956520955 podStartE2EDuration="1.956520955s" podCreationTimestamp="2026-02-01 14:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:38:13.954965307 +0000 UTC m=+1035.475331591" watchObservedRunningTime="2026-02-01 14:38:13.956520955 +0000 UTC m=+1035.476887239" Feb 01 14:38:13 crc kubenswrapper[4820]: I0201 14:38:13.984017 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7746bdf84d-qdnfk" podStartSLOduration=1.9839910440000001 podStartE2EDuration="1.983991044s" podCreationTimestamp="2026-02-01 14:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:38:13.975843966 +0000 UTC m=+1035.496210260" watchObservedRunningTime="2026-02-01 14:38:13.983991044 +0000 UTC m=+1035.504357338" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.231071 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-698cc4bfb6-j2k5c"] Feb 01 14:38:14 crc kubenswrapper[4820]: E0201 14:38:14.231434 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8596fa26-8ba1-4348-8493-3df37f0cfcaa" containerName="barbican-db-sync" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.231449 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8596fa26-8ba1-4348-8493-3df37f0cfcaa" containerName="barbican-db-sync" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.231584 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8596fa26-8ba1-4348-8493-3df37f0cfcaa" containerName="barbican-db-sync" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.237074 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-698cc4bfb6-j2k5c" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.244888 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.244906 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.245215 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-54chm" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.255844 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-698cc4bfb6-j2k5c"] Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.266742 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6f46fc99f5-2w2vx"] Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.268047 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6f46fc99f5-2w2vx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.270371 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.285309 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6f46fc99f5-2w2vx"] Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.313391 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-h2nnx"] Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.315784 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.326747 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-h2nnx"] Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.327643 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b32621b0-3167-4e95-bd7a-e34b45dca08e-config-data\") pod \"barbican-worker-6f46fc99f5-2w2vx\" (UID: \"b32621b0-3167-4e95-bd7a-e34b45dca08e\") " pod="openstack/barbican-worker-6f46fc99f5-2w2vx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.327753 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bf7831d-dffe-4ed1-bfe4-94e787f63f67-config-data-custom\") pod \"barbican-keystone-listener-698cc4bfb6-j2k5c\" (UID: \"3bf7831d-dffe-4ed1-bfe4-94e787f63f67\") " pod="openstack/barbican-keystone-listener-698cc4bfb6-j2k5c" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.327959 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlslv\" (UniqueName: \"kubernetes.io/projected/b32621b0-3167-4e95-bd7a-e34b45dca08e-kube-api-access-mlslv\") pod \"barbican-worker-6f46fc99f5-2w2vx\" (UID: \"b32621b0-3167-4e95-bd7a-e34b45dca08e\") " pod="openstack/barbican-worker-6f46fc99f5-2w2vx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.328065 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjqzl\" (UniqueName: 
\"kubernetes.io/projected/3bf7831d-dffe-4ed1-bfe4-94e787f63f67-kube-api-access-tjqzl\") pod \"barbican-keystone-listener-698cc4bfb6-j2k5c\" (UID: \"3bf7831d-dffe-4ed1-bfe4-94e787f63f67\") " pod="openstack/barbican-keystone-listener-698cc4bfb6-j2k5c" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.328089 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf7831d-dffe-4ed1-bfe4-94e787f63f67-config-data\") pod \"barbican-keystone-listener-698cc4bfb6-j2k5c\" (UID: \"3bf7831d-dffe-4ed1-bfe4-94e787f63f67\") " pod="openstack/barbican-keystone-listener-698cc4bfb6-j2k5c" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.328129 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b32621b0-3167-4e95-bd7a-e34b45dca08e-logs\") pod \"barbican-worker-6f46fc99f5-2w2vx\" (UID: \"b32621b0-3167-4e95-bd7a-e34b45dca08e\") " pod="openstack/barbican-worker-6f46fc99f5-2w2vx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.328148 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b32621b0-3167-4e95-bd7a-e34b45dca08e-config-data-custom\") pod \"barbican-worker-6f46fc99f5-2w2vx\" (UID: \"b32621b0-3167-4e95-bd7a-e34b45dca08e\") " pod="openstack/barbican-worker-6f46fc99f5-2w2vx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.328217 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b32621b0-3167-4e95-bd7a-e34b45dca08e-combined-ca-bundle\") pod \"barbican-worker-6f46fc99f5-2w2vx\" (UID: \"b32621b0-3167-4e95-bd7a-e34b45dca08e\") " pod="openstack/barbican-worker-6f46fc99f5-2w2vx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.328233 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bf7831d-dffe-4ed1-bfe4-94e787f63f67-logs\") pod \"barbican-keystone-listener-698cc4bfb6-j2k5c\" (UID: \"3bf7831d-dffe-4ed1-bfe4-94e787f63f67\") " pod="openstack/barbican-keystone-listener-698cc4bfb6-j2k5c" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.328260 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf7831d-dffe-4ed1-bfe4-94e787f63f67-combined-ca-bundle\") pod \"barbican-keystone-listener-698cc4bfb6-j2k5c\" (UID: \"3bf7831d-dffe-4ed1-bfe4-94e787f63f67\") " pod="openstack/barbican-keystone-listener-698cc4bfb6-j2k5c" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.419626 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-746455c778-hw955"] Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.421044 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-746455c778-hw955" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.433987 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.436769 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b32621b0-3167-4e95-bd7a-e34b45dca08e-config-data\") pod \"barbican-worker-6f46fc99f5-2w2vx\" (UID: \"b32621b0-3167-4e95-bd7a-e34b45dca08e\") " pod="openstack/barbican-worker-6f46fc99f5-2w2vx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.436933 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bf7831d-dffe-4ed1-bfe4-94e787f63f67-config-data-custom\") pod \"barbican-keystone-listener-698cc4bfb6-j2k5c\" (UID: \"3bf7831d-dffe-4ed1-bfe4-94e787f63f67\") " pod="openstack/barbican-keystone-listener-698cc4bfb6-j2k5c" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.437051 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlslv\" (UniqueName: \"kubernetes.io/projected/b32621b0-3167-4e95-bd7a-e34b45dca08e-kube-api-access-mlslv\") pod \"barbican-worker-6f46fc99f5-2w2vx\" (UID: \"b32621b0-3167-4e95-bd7a-e34b45dca08e\") " pod="openstack/barbican-worker-6f46fc99f5-2w2vx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.437214 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b8fff68-565a-4cdf-ac93-d29feede725c-config\") pod \"dnsmasq-dns-7f46f79845-h2nnx\" (UID: \"0b8fff68-565a-4cdf-ac93-d29feede725c\") " pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.437411 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjqzl\" (UniqueName: \"kubernetes.io/projected/3bf7831d-dffe-4ed1-bfe4-94e787f63f67-kube-api-access-tjqzl\") pod \"barbican-keystone-listener-698cc4bfb6-j2k5c\" (UID: \"3bf7831d-dffe-4ed1-bfe4-94e787f63f67\") " pod="openstack/barbican-keystone-listener-698cc4bfb6-j2k5c" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.437993 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf7831d-dffe-4ed1-bfe4-94e787f63f67-config-data\") pod \"barbican-keystone-listener-698cc4bfb6-j2k5c\" (UID: \"3bf7831d-dffe-4ed1-bfe4-94e787f63f67\") " pod="openstack/barbican-keystone-listener-698cc4bfb6-j2k5c" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.438029 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b8fff68-565a-4cdf-ac93-d29feede725c-ovsdbserver-nb\") pod \"dnsmasq-dns-7f46f79845-h2nnx\" (UID: \"0b8fff68-565a-4cdf-ac93-d29feede725c\") " pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.438380 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b32621b0-3167-4e95-bd7a-e34b45dca08e-logs\") pod \"barbican-worker-6f46fc99f5-2w2vx\" (UID: \"b32621b0-3167-4e95-bd7a-e34b45dca08e\") " pod="openstack/barbican-worker-6f46fc99f5-2w2vx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.438409 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b32621b0-3167-4e95-bd7a-e34b45dca08e-config-data-custom\") pod \"barbican-worker-6f46fc99f5-2w2vx\" (UID: \"b32621b0-3167-4e95-bd7a-e34b45dca08e\") " pod="openstack/barbican-worker-6f46fc99f5-2w2vx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.438536 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b32621b0-3167-4e95-bd7a-e34b45dca08e-combined-ca-bundle\") pod \"barbican-worker-6f46fc99f5-2w2vx\" (UID: \"b32621b0-3167-4e95-bd7a-e34b45dca08e\") " pod="openstack/barbican-worker-6f46fc99f5-2w2vx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.438553 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bf7831d-dffe-4ed1-bfe4-94e787f63f67-logs\") pod \"barbican-keystone-listener-698cc4bfb6-j2k5c\" (UID: \"3bf7831d-dffe-4ed1-bfe4-94e787f63f67\") " pod="openstack/barbican-keystone-listener-698cc4bfb6-j2k5c" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.438702 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf7831d-dffe-4ed1-bfe4-94e787f63f67-combined-ca-bundle\") pod \"barbican-keystone-listener-698cc4bfb6-j2k5c\" (UID: \"3bf7831d-dffe-4ed1-bfe4-94e787f63f67\") " pod="openstack/barbican-keystone-listener-698cc4bfb6-j2k5c" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.438735 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b8fff68-565a-4cdf-ac93-d29feede725c-ovsdbserver-sb\") pod \"dnsmasq-dns-7f46f79845-h2nnx\" (UID: \"0b8fff68-565a-4cdf-ac93-d29feede725c\") " pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.438866 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b8fff68-565a-4cdf-ac93-d29feede725c-dns-svc\") pod \"dnsmasq-dns-7f46f79845-h2nnx\" (UID: \"0b8fff68-565a-4cdf-ac93-d29feede725c\") " pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.438918 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck6bj\" (UniqueName: \"kubernetes.io/projected/0b8fff68-565a-4cdf-ac93-d29feede725c-kube-api-access-ck6bj\") pod \"dnsmasq-dns-7f46f79845-h2nnx\" (UID: \"0b8fff68-565a-4cdf-ac93-d29feede725c\") " pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.440122 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bf7831d-dffe-4ed1-bfe4-94e787f63f67-logs\") pod \"barbican-keystone-listener-698cc4bfb6-j2k5c\" (UID: \"3bf7831d-dffe-4ed1-bfe4-94e787f63f67\") " pod="openstack/barbican-keystone-listener-698cc4bfb6-j2k5c" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.440709 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b32621b0-3167-4e95-bd7a-e34b45dca08e-logs\") pod \"barbican-worker-6f46fc99f5-2w2vx\" (UID: \"b32621b0-3167-4e95-bd7a-e34b45dca08e\") " pod="openstack/barbican-worker-6f46fc99f5-2w2vx" Feb 01 
14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.441037 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bf7831d-dffe-4ed1-bfe4-94e787f63f67-config-data-custom\") pod \"barbican-keystone-listener-698cc4bfb6-j2k5c\" (UID: \"3bf7831d-dffe-4ed1-bfe4-94e787f63f67\") " pod="openstack/barbican-keystone-listener-698cc4bfb6-j2k5c" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.450255 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b32621b0-3167-4e95-bd7a-e34b45dca08e-combined-ca-bundle\") pod \"barbican-worker-6f46fc99f5-2w2vx\" (UID: \"b32621b0-3167-4e95-bd7a-e34b45dca08e\") " pod="openstack/barbican-worker-6f46fc99f5-2w2vx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.451599 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf7831d-dffe-4ed1-bfe4-94e787f63f67-combined-ca-bundle\") pod \"barbican-keystone-listener-698cc4bfb6-j2k5c\" (UID: \"3bf7831d-dffe-4ed1-bfe4-94e787f63f67\") " pod="openstack/barbican-keystone-listener-698cc4bfb6-j2k5c" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.452758 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b32621b0-3167-4e95-bd7a-e34b45dca08e-config-data-custom\") pod \"barbican-worker-6f46fc99f5-2w2vx\" (UID: \"b32621b0-3167-4e95-bd7a-e34b45dca08e\") " pod="openstack/barbican-worker-6f46fc99f5-2w2vx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.452809 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b32621b0-3167-4e95-bd7a-e34b45dca08e-config-data\") pod \"barbican-worker-6f46fc99f5-2w2vx\" (UID: \"b32621b0-3167-4e95-bd7a-e34b45dca08e\") " pod="openstack/barbican-worker-6f46fc99f5-2w2vx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.460989 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf7831d-dffe-4ed1-bfe4-94e787f63f67-config-data\") pod \"barbican-keystone-listener-698cc4bfb6-j2k5c\" (UID: \"3bf7831d-dffe-4ed1-bfe4-94e787f63f67\") " pod="openstack/barbican-keystone-listener-698cc4bfb6-j2k5c" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.463831 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjqzl\" (UniqueName: \"kubernetes.io/projected/3bf7831d-dffe-4ed1-bfe4-94e787f63f67-kube-api-access-tjqzl\") pod \"barbican-keystone-listener-698cc4bfb6-j2k5c\" (UID: \"3bf7831d-dffe-4ed1-bfe4-94e787f63f67\") " pod="openstack/barbican-keystone-listener-698cc4bfb6-j2k5c" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.495277 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlslv\" (UniqueName: \"kubernetes.io/projected/b32621b0-3167-4e95-bd7a-e34b45dca08e-kube-api-access-mlslv\") pod \"barbican-worker-6f46fc99f5-2w2vx\" (UID: \"b32621b0-3167-4e95-bd7a-e34b45dca08e\") " pod="openstack/barbican-worker-6f46fc99f5-2w2vx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.498583 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-746455c778-hw955"] Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.542237 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ck6bj\" (UniqueName: \"kubernetes.io/projected/0b8fff68-565a-4cdf-ac93-d29feede725c-kube-api-access-ck6bj\") pod \"dnsmasq-dns-7f46f79845-h2nnx\" (UID: \"0b8fff68-565a-4cdf-ac93-d29feede725c\") " pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.542644 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d446e958-fb74-4146-9dbd-ec4720536b9e-config-data-custom\") pod \"barbican-api-746455c778-hw955\" (UID: \"d446e958-fb74-4146-9dbd-ec4720536b9e\") " pod="openstack/barbican-api-746455c778-hw955" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.542698 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b8fff68-565a-4cdf-ac93-d29feede725c-config\") pod \"dnsmasq-dns-7f46f79845-h2nnx\" (UID: \"0b8fff68-565a-4cdf-ac93-d29feede725c\") " pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.542727 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d446e958-fb74-4146-9dbd-ec4720536b9e-logs\") pod \"barbican-api-746455c778-hw955\" (UID: \"d446e958-fb74-4146-9dbd-ec4720536b9e\") " pod="openstack/barbican-api-746455c778-hw955" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.542771 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d446e958-fb74-4146-9dbd-ec4720536b9e-config-data\") pod \"barbican-api-746455c778-hw955\" (UID: \"d446e958-fb74-4146-9dbd-ec4720536b9e\") " pod="openstack/barbican-api-746455c778-hw955" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.542817 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b8fff68-565a-4cdf-ac93-d29feede725c-ovsdbserver-nb\") pod \"dnsmasq-dns-7f46f79845-h2nnx\" (UID: \"0b8fff68-565a-4cdf-ac93-d29feede725c\") " pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.542858 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d446e958-fb74-4146-9dbd-ec4720536b9e-combined-ca-bundle\") pod \"barbican-api-746455c778-hw955\" (UID: \"d446e958-fb74-4146-9dbd-ec4720536b9e\") " pod="openstack/barbican-api-746455c778-hw955" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.542965 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kvgg\" (UniqueName: \"kubernetes.io/projected/d446e958-fb74-4146-9dbd-ec4720536b9e-kube-api-access-6kvgg\") pod \"barbican-api-746455c778-hw955\" (UID: \"d446e958-fb74-4146-9dbd-ec4720536b9e\") " pod="openstack/barbican-api-746455c778-hw955" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.543005 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b8fff68-565a-4cdf-ac93-d29feede725c-ovsdbserver-sb\") pod \"dnsmasq-dns-7f46f79845-h2nnx\" (UID: \"0b8fff68-565a-4cdf-ac93-d29feede725c\") " pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.543027 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b8fff68-565a-4cdf-ac93-d29feede725c-dns-svc\") pod \"dnsmasq-dns-7f46f79845-h2nnx\" (UID: \"0b8fff68-565a-4cdf-ac93-d29feede725c\") " pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.543977 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b8fff68-565a-4cdf-ac93-d29feede725c-dns-svc\") pod \"dnsmasq-dns-7f46f79845-h2nnx\" (UID: \"0b8fff68-565a-4cdf-ac93-d29feede725c\") " pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.544739 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b8fff68-565a-4cdf-ac93-d29feede725c-ovsdbserver-sb\") pod \"dnsmasq-dns-7f46f79845-h2nnx\" (UID: \"0b8fff68-565a-4cdf-ac93-d29feede725c\") " pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.544970 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b8fff68-565a-4cdf-ac93-d29feede725c-ovsdbserver-nb\") pod \"dnsmasq-dns-7f46f79845-h2nnx\" (UID: \"0b8fff68-565a-4cdf-ac93-d29feede725c\") " pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.545228 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b8fff68-565a-4cdf-ac93-d29feede725c-config\") pod \"dnsmasq-dns-7f46f79845-h2nnx\" (UID: \"0b8fff68-565a-4cdf-ac93-d29feede725c\") " pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.556589 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck6bj\" (UniqueName: \"kubernetes.io/projected/0b8fff68-565a-4cdf-ac93-d29feede725c-kube-api-access-ck6bj\") pod \"dnsmasq-dns-7f46f79845-h2nnx\" (UID: \"0b8fff68-565a-4cdf-ac93-d29feede725c\") " pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.568696 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-698cc4bfb6-j2k5c" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.604727 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6f46fc99f5-2w2vx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.644891 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d446e958-fb74-4146-9dbd-ec4720536b9e-config-data-custom\") pod \"barbican-api-746455c778-hw955\" (UID: \"d446e958-fb74-4146-9dbd-ec4720536b9e\") " pod="openstack/barbican-api-746455c778-hw955" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.644977 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d446e958-fb74-4146-9dbd-ec4720536b9e-logs\") pod \"barbican-api-746455c778-hw955\" (UID: \"d446e958-fb74-4146-9dbd-ec4720536b9e\") " pod="openstack/barbican-api-746455c778-hw955" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.645007 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d446e958-fb74-4146-9dbd-ec4720536b9e-config-data\") pod \"barbican-api-746455c778-hw955\" (UID: \"d446e958-fb74-4146-9dbd-ec4720536b9e\") " pod="openstack/barbican-api-746455c778-hw955" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.645052 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d446e958-fb74-4146-9dbd-ec4720536b9e-combined-ca-bundle\") pod \"barbican-api-746455c778-hw955\" (UID: \"d446e958-fb74-4146-9dbd-ec4720536b9e\") " pod="openstack/barbican-api-746455c778-hw955" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.645088 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kvgg\" (UniqueName: \"kubernetes.io/projected/d446e958-fb74-4146-9dbd-ec4720536b9e-kube-api-access-6kvgg\") pod \"barbican-api-746455c778-hw955\" (UID: \"d446e958-fb74-4146-9dbd-ec4720536b9e\") " pod="openstack/barbican-api-746455c778-hw955" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.646288 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d446e958-fb74-4146-9dbd-ec4720536b9e-logs\") pod \"barbican-api-746455c778-hw955\" (UID: \"d446e958-fb74-4146-9dbd-ec4720536b9e\") " pod="openstack/barbican-api-746455c778-hw955" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.650024 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d446e958-fb74-4146-9dbd-ec4720536b9e-config-data-custom\") pod \"barbican-api-746455c778-hw955\" (UID: \"d446e958-fb74-4146-9dbd-ec4720536b9e\") " pod="openstack/barbican-api-746455c778-hw955" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.650458 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d446e958-fb74-4146-9dbd-ec4720536b9e-combined-ca-bundle\") pod \"barbican-api-746455c778-hw955\" (UID: \"d446e958-fb74-4146-9dbd-ec4720536b9e\") " pod="openstack/barbican-api-746455c778-hw955" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.652493 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.652590 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d446e958-fb74-4146-9dbd-ec4720536b9e-config-data\") pod \"barbican-api-746455c778-hw955\" (UID: \"d446e958-fb74-4146-9dbd-ec4720536b9e\") " pod="openstack/barbican-api-746455c778-hw955" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.665390 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kvgg\" (UniqueName: \"kubernetes.io/projected/d446e958-fb74-4146-9dbd-ec4720536b9e-kube-api-access-6kvgg\") pod \"barbican-api-746455c778-hw955\" (UID: \"d446e958-fb74-4146-9dbd-ec4720536b9e\") " pod="openstack/barbican-api-746455c778-hw955" Feb 01 14:38:14 crc kubenswrapper[4820]: I0201 14:38:14.844266 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-746455c778-hw955" Feb 01 14:38:15 crc kubenswrapper[4820]: I0201 14:38:15.019012 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6f46fc99f5-2w2vx"] Feb 01 14:38:15 crc kubenswrapper[4820]: W0201 14:38:15.023032 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb32621b0_3167_4e95_bd7a_e34b45dca08e.slice/crio-c52b24ce4438e9ff1b4197da62596ad0fb3f5187385c1791fcffc1c0c11d61d5 WatchSource:0}: Error finding container c52b24ce4438e9ff1b4197da62596ad0fb3f5187385c1791fcffc1c0c11d61d5: Status 404 returned error can't find the container with id c52b24ce4438e9ff1b4197da62596ad0fb3f5187385c1791fcffc1c0c11d61d5 Feb 01 14:38:15 crc kubenswrapper[4820]: I0201 14:38:15.120540 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-698cc4bfb6-j2k5c"] Feb 01 14:38:15 crc kubenswrapper[4820]: I0201 14:38:15.352826 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-h2nnx"] Feb 01 14:38:15 crc kubenswrapper[4820]: W0201 14:38:15.367268 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b8fff68_565a_4cdf_ac93_d29feede725c.slice/crio-574b90a3e526ed65eae42fa04e04fab7d5b873f3716b858c9c61b9bbb1290fc1 WatchSource:0}: Error finding container 574b90a3e526ed65eae42fa04e04fab7d5b873f3716b858c9c61b9bbb1290fc1: Status 404 returned error can't find the container with id 574b90a3e526ed65eae42fa04e04fab7d5b873f3716b858c9c61b9bbb1290fc1 Feb 01 14:38:15 crc kubenswrapper[4820]: I0201 14:38:15.429248 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-746455c778-hw955"] Feb 01 14:38:15 crc kubenswrapper[4820]: W0201 14:38:15.434734 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd446e958_fb74_4146_9dbd_ec4720536b9e.slice/crio-8019b212fd5740b9454b6057cb1033c8f35b7a164043091292f9c011111f788e WatchSource:0}: Error finding container 8019b212fd5740b9454b6057cb1033c8f35b7a164043091292f9c011111f788e: Status 404 returned error can't find the container with id 8019b212fd5740b9454b6057cb1033c8f35b7a164043091292f9c011111f788e Feb 01 14:38:15 crc kubenswrapper[4820]: I0201 14:38:15.975908 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-746455c778-hw955" 
event={"ID":"d446e958-fb74-4146-9dbd-ec4720536b9e","Type":"ContainerStarted","Data":"0ebce16cff1333d3d6dca89e21978509424e41e1fb668ab1353676d936244ba1"} Feb 01 14:38:15 crc kubenswrapper[4820]: I0201 14:38:15.976276 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-746455c778-hw955" Feb 01 14:38:15 crc kubenswrapper[4820]: I0201 14:38:15.976292 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-746455c778-hw955" event={"ID":"d446e958-fb74-4146-9dbd-ec4720536b9e","Type":"ContainerStarted","Data":"c44f12df7794ddd18b9bb72de290241b6725a1dfe070dfcbd9075486023955af"} Feb 01 14:38:15 crc kubenswrapper[4820]: I0201 14:38:15.976303 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-746455c778-hw955" event={"ID":"d446e958-fb74-4146-9dbd-ec4720536b9e","Type":"ContainerStarted","Data":"8019b212fd5740b9454b6057cb1033c8f35b7a164043091292f9c011111f788e"} Feb 01 14:38:15 crc kubenswrapper[4820]: I0201 14:38:15.976437 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-746455c778-hw955" Feb 01 14:38:15 crc kubenswrapper[4820]: I0201 14:38:15.980423 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6f46fc99f5-2w2vx" event={"ID":"b32621b0-3167-4e95-bd7a-e34b45dca08e","Type":"ContainerStarted","Data":"c52b24ce4438e9ff1b4197da62596ad0fb3f5187385c1791fcffc1c0c11d61d5"} Feb 01 14:38:15 crc kubenswrapper[4820]: I0201 14:38:15.996187 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-698cc4bfb6-j2k5c" event={"ID":"3bf7831d-dffe-4ed1-bfe4-94e787f63f67","Type":"ContainerStarted","Data":"8fc9c79e2567f76c8ac90ed30357f7302f153c0b2bf8cb69edecabb9f3dd10b0"} Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.013890 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-746455c778-hw955" podStartSLOduration=2.013855669 podStartE2EDuration="2.013855669s" podCreationTimestamp="2026-02-01 14:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:38:16.000013582 +0000 UTC m=+1037.520379876" watchObservedRunningTime="2026-02-01 14:38:16.013855669 +0000 UTC m=+1037.534221953" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.014769 4820 generic.go:334] "Generic (PLEG): container finished" podID="0b8fff68-565a-4cdf-ac93-d29feede725c" containerID="b0f7f6e59647bdd432ddb2bf7c10db393c8fec412170402e6fa52975b5dd0c8f" exitCode=0 Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.014829 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" event={"ID":"0b8fff68-565a-4cdf-ac93-d29feede725c","Type":"ContainerDied","Data":"b0f7f6e59647bdd432ddb2bf7c10db393c8fec412170402e6fa52975b5dd0c8f"} Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.014916 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" event={"ID":"0b8fff68-565a-4cdf-ac93-d29feede725c","Type":"ContainerStarted","Data":"574b90a3e526ed65eae42fa04e04fab7d5b873f3716b858c9c61b9bbb1290fc1"} Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.617077 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-84cfb79c88-556g7"] Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.618685 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.620492 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.621169 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.637771 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-84cfb79c88-556g7"] Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.689998 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3582e8fb-39ef-4974-ba08-1cf88f7fc83e-internal-tls-certs\") pod \"barbican-api-84cfb79c88-556g7\" (UID: \"3582e8fb-39ef-4974-ba08-1cf88f7fc83e\") " pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.690045 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3582e8fb-39ef-4974-ba08-1cf88f7fc83e-public-tls-certs\") pod \"barbican-api-84cfb79c88-556g7\" (UID: \"3582e8fb-39ef-4974-ba08-1cf88f7fc83e\") " pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.690093 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3582e8fb-39ef-4974-ba08-1cf88f7fc83e-config-data\") pod \"barbican-api-84cfb79c88-556g7\" (UID: \"3582e8fb-39ef-4974-ba08-1cf88f7fc83e\") " pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.690124 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnfzf\" (UniqueName: \"kubernetes.io/projected/3582e8fb-39ef-4974-ba08-1cf88f7fc83e-kube-api-access-lnfzf\") pod \"barbican-api-84cfb79c88-556g7\" (UID: \"3582e8fb-39ef-4974-ba08-1cf88f7fc83e\") " pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.690156 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3582e8fb-39ef-4974-ba08-1cf88f7fc83e-config-data-custom\") pod \"barbican-api-84cfb79c88-556g7\" (UID: \"3582e8fb-39ef-4974-ba08-1cf88f7fc83e\") " pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.690178 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3582e8fb-39ef-4974-ba08-1cf88f7fc83e-logs\") pod \"barbican-api-84cfb79c88-556g7\" (UID: \"3582e8fb-39ef-4974-ba08-1cf88f7fc83e\") " pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.690356 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3582e8fb-39ef-4974-ba08-1cf88f7fc83e-combined-ca-bundle\") pod \"barbican-api-84cfb79c88-556g7\" (UID: \"3582e8fb-39ef-4974-ba08-1cf88f7fc83e\") " pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.792941 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3582e8fb-39ef-4974-ba08-1cf88f7fc83e-internal-tls-certs\") pod \"barbican-api-84cfb79c88-556g7\" (UID: \"3582e8fb-39ef-4974-ba08-1cf88f7fc83e\") " pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.792997 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3582e8fb-39ef-4974-ba08-1cf88f7fc83e-public-tls-certs\") pod \"barbican-api-84cfb79c88-556g7\" (UID: \"3582e8fb-39ef-4974-ba08-1cf88f7fc83e\") " pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.793071 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3582e8fb-39ef-4974-ba08-1cf88f7fc83e-config-data\") pod \"barbican-api-84cfb79c88-556g7\" (UID: \"3582e8fb-39ef-4974-ba08-1cf88f7fc83e\") " pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.793109 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnfzf\" (UniqueName: \"kubernetes.io/projected/3582e8fb-39ef-4974-ba08-1cf88f7fc83e-kube-api-access-lnfzf\") pod \"barbican-api-84cfb79c88-556g7\" (UID: \"3582e8fb-39ef-4974-ba08-1cf88f7fc83e\") " pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.793137 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3582e8fb-39ef-4974-ba08-1cf88f7fc83e-config-data-custom\") pod \"barbican-api-84cfb79c88-556g7\" (UID: \"3582e8fb-39ef-4974-ba08-1cf88f7fc83e\") " pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.793164 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3582e8fb-39ef-4974-ba08-1cf88f7fc83e-logs\") pod \"barbican-api-84cfb79c88-556g7\" (UID: \"3582e8fb-39ef-4974-ba08-1cf88f7fc83e\") " pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.793246 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3582e8fb-39ef-4974-ba08-1cf88f7fc83e-combined-ca-bundle\") pod \"barbican-api-84cfb79c88-556g7\" (UID: \"3582e8fb-39ef-4974-ba08-1cf88f7fc83e\") " pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.806920 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3582e8fb-39ef-4974-ba08-1cf88f7fc83e-logs\") pod \"barbican-api-84cfb79c88-556g7\" (UID: \"3582e8fb-39ef-4974-ba08-1cf88f7fc83e\") " pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.813858 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3582e8fb-39ef-4974-ba08-1cf88f7fc83e-public-tls-certs\") pod \"barbican-api-84cfb79c88-556g7\" (UID: \"3582e8fb-39ef-4974-ba08-1cf88f7fc83e\") " pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.815107 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/3582e8fb-39ef-4974-ba08-1cf88f7fc83e-combined-ca-bundle\") pod \"barbican-api-84cfb79c88-556g7\" (UID: \"3582e8fb-39ef-4974-ba08-1cf88f7fc83e\") " pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.815224 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3582e8fb-39ef-4974-ba08-1cf88f7fc83e-config-data\") pod \"barbican-api-84cfb79c88-556g7\" (UID: \"3582e8fb-39ef-4974-ba08-1cf88f7fc83e\") " pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.818559 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3582e8fb-39ef-4974-ba08-1cf88f7fc83e-config-data-custom\") pod \"barbican-api-84cfb79c88-556g7\" (UID: \"3582e8fb-39ef-4974-ba08-1cf88f7fc83e\") " pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.821674 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6bf6b5fdf4-gb9fp"] Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.823527 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.834478 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6bf6b5fdf4-gb9fp"] Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.841477 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3582e8fb-39ef-4974-ba08-1cf88f7fc83e-internal-tls-certs\") pod \"barbican-api-84cfb79c88-556g7\" (UID: \"3582e8fb-39ef-4974-ba08-1cf88f7fc83e\") " pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.846371 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnfzf\" (UniqueName: \"kubernetes.io/projected/3582e8fb-39ef-4974-ba08-1cf88f7fc83e-kube-api-access-lnfzf\") pod \"barbican-api-84cfb79c88-556g7\" (UID: \"3582e8fb-39ef-4974-ba08-1cf88f7fc83e\") " pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.894485 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5-internal-tls-certs\") pod \"placement-6bf6b5fdf4-gb9fp\" (UID: \"ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5\") " pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.894563 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5-logs\") pod \"placement-6bf6b5fdf4-gb9fp\" (UID: \"ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5\") " pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.894690 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5-combined-ca-bundle\") pod \"placement-6bf6b5fdf4-gb9fp\" (UID: \"ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5\") " pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.894859 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lmgq\" (UniqueName: \"kubernetes.io/projected/ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5-kube-api-access-8lmgq\") pod \"placement-6bf6b5fdf4-gb9fp\" (UID: \"ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5\") " pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.894985 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5-scripts\") pod \"placement-6bf6b5fdf4-gb9fp\" (UID: \"ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5\") " pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.895027 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5-public-tls-certs\") pod \"placement-6bf6b5fdf4-gb9fp\" (UID: \"ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5\") " pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.895143 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5-config-data\") pod \"placement-6bf6b5fdf4-gb9fp\" (UID: \"ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5\") " pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.939132 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.997179 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lmgq\" (UniqueName: \"kubernetes.io/projected/ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5-kube-api-access-8lmgq\") pod \"placement-6bf6b5fdf4-gb9fp\" (UID: \"ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5\") " pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.997240 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5-scripts\") pod \"placement-6bf6b5fdf4-gb9fp\" (UID: \"ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5\") " pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.997262 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5-public-tls-certs\") pod \"placement-6bf6b5fdf4-gb9fp\" (UID: \"ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5\") " pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.997286 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5-config-data\") pod \"placement-6bf6b5fdf4-gb9fp\" (UID: \"ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5\") " pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.997330 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5-internal-tls-certs\") pod \"placement-6bf6b5fdf4-gb9fp\" (UID: 
\"ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5\") " pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.997374 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5-logs\") pod \"placement-6bf6b5fdf4-gb9fp\" (UID: \"ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5\") " pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.997407 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5-combined-ca-bundle\") pod \"placement-6bf6b5fdf4-gb9fp\" (UID: \"ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5\") " pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:16 crc kubenswrapper[4820]: I0201 14:38:16.998444 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5-logs\") pod \"placement-6bf6b5fdf4-gb9fp\" (UID: \"ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5\") " pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:17 crc kubenswrapper[4820]: I0201 14:38:17.001928 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5-combined-ca-bundle\") pod \"placement-6bf6b5fdf4-gb9fp\" (UID: \"ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5\") " pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:17 crc kubenswrapper[4820]: I0201 14:38:17.003897 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5-config-data\") pod \"placement-6bf6b5fdf4-gb9fp\" (UID: \"ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5\") " pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:17 crc kubenswrapper[4820]: I0201 14:38:17.005285 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5-internal-tls-certs\") pod \"placement-6bf6b5fdf4-gb9fp\" (UID: \"ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5\") " pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:17 crc kubenswrapper[4820]: I0201 14:38:17.006672 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5-scripts\") pod \"placement-6bf6b5fdf4-gb9fp\" (UID: \"ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5\") " pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:17 crc kubenswrapper[4820]: I0201 14:38:17.008802 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5-public-tls-certs\") pod \"placement-6bf6b5fdf4-gb9fp\" (UID: \"ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5\") " pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:17 crc kubenswrapper[4820]: I0201 14:38:17.020701 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lmgq\" (UniqueName: \"kubernetes.io/projected/ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5-kube-api-access-8lmgq\") pod \"placement-6bf6b5fdf4-gb9fp\" (UID: \"ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5\") " pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:17 crc kubenswrapper[4820]: I0201 14:38:17.224884 4820 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:17 crc kubenswrapper[4820]: I0201 14:38:17.500139 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6bf6b5fdf4-gb9fp"] Feb 01 14:38:17 crc kubenswrapper[4820]: I0201 14:38:17.517346 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-84cfb79c88-556g7"] Feb 01 14:38:18 crc kubenswrapper[4820]: I0201 14:38:18.053817 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6f46fc99f5-2w2vx" event={"ID":"b32621b0-3167-4e95-bd7a-e34b45dca08e","Type":"ContainerStarted","Data":"a270c02b95af84a34e03c7766618c9f4d1f79d8d27bd4b2a44e90f61572db291"} Feb 01 14:38:18 crc kubenswrapper[4820]: I0201 14:38:18.056547 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-698cc4bfb6-j2k5c" event={"ID":"3bf7831d-dffe-4ed1-bfe4-94e787f63f67","Type":"ContainerStarted","Data":"513cc8c0af56e3d1903cea16f5345dccec0206241d4de164a4b281bbaeb23a0a"} Feb 01 14:38:18 crc kubenswrapper[4820]: I0201 14:38:18.058974 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" event={"ID":"0b8fff68-565a-4cdf-ac93-d29feede725c","Type":"ContainerStarted","Data":"76efab823dbc8e7d3b3258798efef254f9da33f2e3124206e0cdf2fdf80e38d4"} Feb 01 14:38:18 crc kubenswrapper[4820]: I0201 14:38:18.059495 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" Feb 01 14:38:18 crc kubenswrapper[4820]: I0201 14:38:18.079984 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" podStartSLOduration=4.079962367 podStartE2EDuration="4.079962367s" podCreationTimestamp="2026-02-01 14:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:38:18.073191882 +0000 UTC m=+1039.593558166" watchObservedRunningTime="2026-02-01 14:38:18.079962367 +0000 UTC m=+1039.600328651" Feb 01 14:38:20 crc kubenswrapper[4820]: I0201 14:38:20.075870 4820 generic.go:334] "Generic (PLEG): container finished" podID="5668430a-a444-4146-b357-30f626e2e9d6" containerID="aa0a5f8792601307a279d9ca93611812efbdfaba39271a4bb62827725f41699f" exitCode=0 Feb 01 14:38:20 crc kubenswrapper[4820]: I0201 14:38:20.075928 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-s4w47" event={"ID":"5668430a-a444-4146-b357-30f626e2e9d6","Type":"ContainerDied","Data":"aa0a5f8792601307a279d9ca93611812efbdfaba39271a4bb62827725f41699f"} Feb 01 14:38:21 crc kubenswrapper[4820]: I0201 14:38:21.086338 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bf6b5fdf4-gb9fp" event={"ID":"ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5","Type":"ContainerStarted","Data":"ffccc4375d06df18e48dbbafc8b426ac7b2142153f4f70de5f9de7f407264ed2"} Feb 01 14:38:21 crc kubenswrapper[4820]: I0201 14:38:21.088125 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84cfb79c88-556g7" event={"ID":"3582e8fb-39ef-4974-ba08-1cf88f7fc83e","Type":"ContainerStarted","Data":"a68c935553c10a532595e82c113b764690afbd7664a49c9d7dc7d1ebd8c7aa26"} Feb 01 14:38:21 crc kubenswrapper[4820]: I0201 14:38:21.359975 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-s4w47" Feb 01 14:38:21 crc kubenswrapper[4820]: E0201 14:38:21.399828 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="dcf7fd8f-91f1-4742-bcdb-351da5ded25a" Feb 01 14:38:21 crc kubenswrapper[4820]: I0201 14:38:21.488607 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5668430a-a444-4146-b357-30f626e2e9d6-config\") pod \"5668430a-a444-4146-b357-30f626e2e9d6\" (UID: \"5668430a-a444-4146-b357-30f626e2e9d6\") " Feb 01 14:38:21 crc kubenswrapper[4820]: I0201 14:38:21.488698 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvthc\" (UniqueName: \"kubernetes.io/projected/5668430a-a444-4146-b357-30f626e2e9d6-kube-api-access-tvthc\") pod \"5668430a-a444-4146-b357-30f626e2e9d6\" (UID: \"5668430a-a444-4146-b357-30f626e2e9d6\") " Feb 01 14:38:21 crc kubenswrapper[4820]: I0201 14:38:21.488942 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5668430a-a444-4146-b357-30f626e2e9d6-combined-ca-bundle\") pod \"5668430a-a444-4146-b357-30f626e2e9d6\" (UID: \"5668430a-a444-4146-b357-30f626e2e9d6\") " Feb 01 14:38:21 crc kubenswrapper[4820]: I0201 14:38:21.495069 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5668430a-a444-4146-b357-30f626e2e9d6-kube-api-access-tvthc" (OuterVolumeSpecName: "kube-api-access-tvthc") pod "5668430a-a444-4146-b357-30f626e2e9d6" (UID: "5668430a-a444-4146-b357-30f626e2e9d6"). InnerVolumeSpecName "kube-api-access-tvthc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:38:21 crc kubenswrapper[4820]: I0201 14:38:21.520965 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5668430a-a444-4146-b357-30f626e2e9d6-config" (OuterVolumeSpecName: "config") pod "5668430a-a444-4146-b357-30f626e2e9d6" (UID: "5668430a-a444-4146-b357-30f626e2e9d6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:21 crc kubenswrapper[4820]: I0201 14:38:21.530570 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5668430a-a444-4146-b357-30f626e2e9d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5668430a-a444-4146-b357-30f626e2e9d6" (UID: "5668430a-a444-4146-b357-30f626e2e9d6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:21 crc kubenswrapper[4820]: I0201 14:38:21.591165 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5668430a-a444-4146-b357-30f626e2e9d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:21 crc kubenswrapper[4820]: I0201 14:38:21.591343 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5668430a-a444-4146-b357-30f626e2e9d6-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:21 crc kubenswrapper[4820]: I0201 14:38:21.591358 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvthc\" (UniqueName: \"kubernetes.io/projected/5668430a-a444-4146-b357-30f626e2e9d6-kube-api-access-tvthc\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.098426 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6f46fc99f5-2w2vx" event={"ID":"b32621b0-3167-4e95-bd7a-e34b45dca08e","Type":"ContainerStarted","Data":"844373f8ced3016eb37c600bc1976cf175ce7ba34e5d912d8fd5a0976ae2324f"} Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.105149 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wpqvp" event={"ID":"857bc684-4d17-461f-9183-6c0a7ac89845","Type":"ContainerStarted","Data":"5244e6bbf496ccea2ca747b01093f042b2d370a7e255bb50c57b39c6e4e75982"} Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.111726 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcf7fd8f-91f1-4742-bcdb-351da5ded25a","Type":"ContainerStarted","Data":"fd3fc9e578d977e62cf38f77e44587ad941d0e227b1abd9e445cc5369537e9ee"} Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.112008 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dcf7fd8f-91f1-4742-bcdb-351da5ded25a" containerName="ceilometer-notification-agent" containerID="cri-o://99b5bbb4f8a5a494992d1807c34ffd7c227efa7601613ac7f4adbaabad0f1718" gracePeriod=30 Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.112319 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.112380 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dcf7fd8f-91f1-4742-bcdb-351da5ded25a" containerName="proxy-httpd" containerID="cri-o://fd3fc9e578d977e62cf38f77e44587ad941d0e227b1abd9e445cc5369537e9ee" gracePeriod=30 Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.112436 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dcf7fd8f-91f1-4742-bcdb-351da5ded25a" containerName="sg-core" containerID="cri-o://197e5a0c9cf004db3a4026c36aef07736111e6bebdae330ca478b195399c935e" gracePeriod=30 Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.140203 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84cfb79c88-556g7" event={"ID":"3582e8fb-39ef-4974-ba08-1cf88f7fc83e","Type":"ContainerStarted","Data":"4f1a9522013c10f8f9550a3c2912dc6a824753eea1ceca96510814bbc23d75d0"} Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.140247 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84cfb79c88-556g7" 
event={"ID":"3582e8fb-39ef-4974-ba08-1cf88f7fc83e","Type":"ContainerStarted","Data":"009f6385704059faba9489b8cec6279d82cedd04f906ea9229f036d553536e35"} Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.166332 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.166410 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.187738 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-698cc4bfb6-j2k5c" event={"ID":"3bf7831d-dffe-4ed1-bfe4-94e787f63f67","Type":"ContainerStarted","Data":"11bf5f3274faff60b58089ada9bee2da5a1160d6b93780290a3add5b8b86576d"} Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.188440 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6f46fc99f5-2w2vx" podStartSLOduration=6.148957555 podStartE2EDuration="8.188418873s" podCreationTimestamp="2026-02-01 14:38:14 +0000 UTC" firstStartedPulling="2026-02-01 14:38:15.025065729 +0000 UTC m=+1036.545432013" lastFinishedPulling="2026-02-01 14:38:17.064527047 +0000 UTC m=+1038.584893331" observedRunningTime="2026-02-01 14:38:22.126023863 +0000 UTC m=+1043.646390177" watchObservedRunningTime="2026-02-01 14:38:22.188418873 +0000 UTC m=+1043.708785157" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.201566 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-s4w47" event={"ID":"5668430a-a444-4146-b357-30f626e2e9d6","Type":"ContainerDied","Data":"52b7f0bb2e1473e494933831f142f37602b31266a6cec12930732f1f40093552"} Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.201607 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52b7f0bb2e1473e494933831f142f37602b31266a6cec12930732f1f40093552" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.201691 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-s4w47" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.206657 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bf6b5fdf4-gb9fp" event={"ID":"ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5","Type":"ContainerStarted","Data":"ab1e012c6f26bd1fdad896a5876bef62a7731895b59aeab03422eefef1c4ddfa"} Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.206717 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bf6b5fdf4-gb9fp" event={"ID":"ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5","Type":"ContainerStarted","Data":"38c646bd63780faa020345e1ad7b94c801c3a253ce71611ae454d0a6ca8b7d5d"} Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.207576 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.207613 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.212029 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-wpqvp" podStartSLOduration=2.347252024 podStartE2EDuration="41.212007837s" podCreationTimestamp="2026-02-01 14:37:41 +0000 UTC" firstStartedPulling="2026-02-01 14:37:42.140329341 +0000 UTC m=+1003.660695625" lastFinishedPulling="2026-02-01 14:38:21.005085154 +0000 UTC m=+1042.525451438" observedRunningTime="2026-02-01 14:38:22.170534067 +0000 UTC m=+1043.690900351" watchObservedRunningTime="2026-02-01 14:38:22.212007837 +0000 UTC m=+1043.732374121" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.250603 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-84cfb79c88-556g7" podStartSLOduration=6.250576626 podStartE2EDuration="6.250576626s" podCreationTimestamp="2026-02-01 14:38:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:38:22.223895036 +0000 UTC m=+1043.744261320" watchObservedRunningTime="2026-02-01 14:38:22.250576626 +0000 UTC m=+1043.770942910" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.314306 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-698cc4bfb6-j2k5c" podStartSLOduration=6.355895984 podStartE2EDuration="8.314288728s" podCreationTimestamp="2026-02-01 14:38:14 +0000 UTC" firstStartedPulling="2026-02-01 14:38:15.129729867 +0000 UTC m=+1036.650096151" lastFinishedPulling="2026-02-01 14:38:17.088122611 +0000 UTC m=+1038.608488895" observedRunningTime="2026-02-01 14:38:22.247129773 +0000 UTC m=+1043.767496057" watchObservedRunningTime="2026-02-01 14:38:22.314288728 +0000 UTC m=+1043.834655002" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.329446 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6bf6b5fdf4-gb9fp" podStartSLOduration=6.329427177 podStartE2EDuration="6.329427177s" podCreationTimestamp="2026-02-01 14:38:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:38:22.282404392 +0000 UTC m=+1043.802770676" watchObservedRunningTime="2026-02-01 14:38:22.329427177 +0000 UTC m=+1043.849793461" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.409994 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-7f46f79845-h2nnx"] Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.410237 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" podUID="0b8fff68-565a-4cdf-ac93-d29feede725c" containerName="dnsmasq-dns" containerID="cri-o://76efab823dbc8e7d3b3258798efef254f9da33f2e3124206e0cdf2fdf80e38d4" gracePeriod=10 Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.412607 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.445236 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-869f779d85-wl585"] Feb 01 14:38:22 crc kubenswrapper[4820]: E0201 14:38:22.445655 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5668430a-a444-4146-b357-30f626e2e9d6" containerName="neutron-db-sync" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.445681 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5668430a-a444-4146-b357-30f626e2e9d6" containerName="neutron-db-sync" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.445858 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5668430a-a444-4146-b357-30f626e2e9d6" containerName="neutron-db-sync" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.446811 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-wl585" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.501221 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-wl585"] Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.510905 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8f43c3c-32c5-4636-bbc7-21abde903f86-config\") pod \"dnsmasq-dns-869f779d85-wl585\" (UID: \"e8f43c3c-32c5-4636-bbc7-21abde903f86\") " pod="openstack/dnsmasq-dns-869f779d85-wl585" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.510964 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8f43c3c-32c5-4636-bbc7-21abde903f86-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-wl585\" (UID: \"e8f43c3c-32c5-4636-bbc7-21abde903f86\") " pod="openstack/dnsmasq-dns-869f779d85-wl585" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.511042 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8f43c3c-32c5-4636-bbc7-21abde903f86-dns-svc\") pod \"dnsmasq-dns-869f779d85-wl585\" (UID: \"e8f43c3c-32c5-4636-bbc7-21abde903f86\") " pod="openstack/dnsmasq-dns-869f779d85-wl585" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.511091 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvlq8\" (UniqueName: \"kubernetes.io/projected/e8f43c3c-32c5-4636-bbc7-21abde903f86-kube-api-access-bvlq8\") pod \"dnsmasq-dns-869f779d85-wl585\" (UID: \"e8f43c3c-32c5-4636-bbc7-21abde903f86\") " pod="openstack/dnsmasq-dns-869f779d85-wl585" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.512296 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/e8f43c3c-32c5-4636-bbc7-21abde903f86-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-wl585\" (UID: \"e8f43c3c-32c5-4636-bbc7-21abde903f86\") " pod="openstack/dnsmasq-dns-869f779d85-wl585" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.553614 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8466455f76-tpjsw"] Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.557707 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8466455f76-tpjsw" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.560043 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8466455f76-tpjsw"] Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.566685 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.566791 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-n6mrb" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.567040 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.567179 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.616147 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5518d4cc-191a-448e-8b7b-adc5ddfe587b-config\") pod \"neutron-8466455f76-tpjsw\" (UID: \"5518d4cc-191a-448e-8b7b-adc5ddfe587b\") " pod="openstack/neutron-8466455f76-tpjsw" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.616236 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8f43c3c-32c5-4636-bbc7-21abde903f86-config\") pod \"dnsmasq-dns-869f779d85-wl585\" (UID: \"e8f43c3c-32c5-4636-bbc7-21abde903f86\") " pod="openstack/dnsmasq-dns-869f779d85-wl585" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.616263 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8f43c3c-32c5-4636-bbc7-21abde903f86-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-wl585\" (UID: \"e8f43c3c-32c5-4636-bbc7-21abde903f86\") " pod="openstack/dnsmasq-dns-869f779d85-wl585" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.616304 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8f43c3c-32c5-4636-bbc7-21abde903f86-dns-svc\") pod \"dnsmasq-dns-869f779d85-wl585\" (UID: \"e8f43c3c-32c5-4636-bbc7-21abde903f86\") " pod="openstack/dnsmasq-dns-869f779d85-wl585" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.616325 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5518d4cc-191a-448e-8b7b-adc5ddfe587b-ovndb-tls-certs\") pod \"neutron-8466455f76-tpjsw\" (UID: \"5518d4cc-191a-448e-8b7b-adc5ddfe587b\") " pod="openstack/neutron-8466455f76-tpjsw" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.616392 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvlq8\" (UniqueName: 
\"kubernetes.io/projected/e8f43c3c-32c5-4636-bbc7-21abde903f86-kube-api-access-bvlq8\") pod \"dnsmasq-dns-869f779d85-wl585\" (UID: \"e8f43c3c-32c5-4636-bbc7-21abde903f86\") " pod="openstack/dnsmasq-dns-869f779d85-wl585" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.616429 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8f43c3c-32c5-4636-bbc7-21abde903f86-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-wl585\" (UID: \"e8f43c3c-32c5-4636-bbc7-21abde903f86\") " pod="openstack/dnsmasq-dns-869f779d85-wl585" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.616445 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5518d4cc-191a-448e-8b7b-adc5ddfe587b-httpd-config\") pod \"neutron-8466455f76-tpjsw\" (UID: \"5518d4cc-191a-448e-8b7b-adc5ddfe587b\") " pod="openstack/neutron-8466455f76-tpjsw" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.616474 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nzww\" (UniqueName: \"kubernetes.io/projected/5518d4cc-191a-448e-8b7b-adc5ddfe587b-kube-api-access-8nzww\") pod \"neutron-8466455f76-tpjsw\" (UID: \"5518d4cc-191a-448e-8b7b-adc5ddfe587b\") " pod="openstack/neutron-8466455f76-tpjsw" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.616497 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5518d4cc-191a-448e-8b7b-adc5ddfe587b-combined-ca-bundle\") pod \"neutron-8466455f76-tpjsw\" (UID: \"5518d4cc-191a-448e-8b7b-adc5ddfe587b\") " pod="openstack/neutron-8466455f76-tpjsw" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.617770 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8f43c3c-32c5-4636-bbc7-21abde903f86-config\") pod \"dnsmasq-dns-869f779d85-wl585\" (UID: \"e8f43c3c-32c5-4636-bbc7-21abde903f86\") " pod="openstack/dnsmasq-dns-869f779d85-wl585" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.618387 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8f43c3c-32c5-4636-bbc7-21abde903f86-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-wl585\" (UID: \"e8f43c3c-32c5-4636-bbc7-21abde903f86\") " pod="openstack/dnsmasq-dns-869f779d85-wl585" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.619025 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8f43c3c-32c5-4636-bbc7-21abde903f86-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-wl585\" (UID: \"e8f43c3c-32c5-4636-bbc7-21abde903f86\") " pod="openstack/dnsmasq-dns-869f779d85-wl585" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.619138 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8f43c3c-32c5-4636-bbc7-21abde903f86-dns-svc\") pod \"dnsmasq-dns-869f779d85-wl585\" (UID: \"e8f43c3c-32c5-4636-bbc7-21abde903f86\") " pod="openstack/dnsmasq-dns-869f779d85-wl585" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.673995 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvlq8\" (UniqueName: 
\"kubernetes.io/projected/e8f43c3c-32c5-4636-bbc7-21abde903f86-kube-api-access-bvlq8\") pod \"dnsmasq-dns-869f779d85-wl585\" (UID: \"e8f43c3c-32c5-4636-bbc7-21abde903f86\") " pod="openstack/dnsmasq-dns-869f779d85-wl585" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.717806 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5518d4cc-191a-448e-8b7b-adc5ddfe587b-ovndb-tls-certs\") pod \"neutron-8466455f76-tpjsw\" (UID: \"5518d4cc-191a-448e-8b7b-adc5ddfe587b\") " pod="openstack/neutron-8466455f76-tpjsw" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.717941 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5518d4cc-191a-448e-8b7b-adc5ddfe587b-httpd-config\") pod \"neutron-8466455f76-tpjsw\" (UID: \"5518d4cc-191a-448e-8b7b-adc5ddfe587b\") " pod="openstack/neutron-8466455f76-tpjsw" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.717997 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nzww\" (UniqueName: \"kubernetes.io/projected/5518d4cc-191a-448e-8b7b-adc5ddfe587b-kube-api-access-8nzww\") pod \"neutron-8466455f76-tpjsw\" (UID: \"5518d4cc-191a-448e-8b7b-adc5ddfe587b\") " pod="openstack/neutron-8466455f76-tpjsw" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.718030 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5518d4cc-191a-448e-8b7b-adc5ddfe587b-combined-ca-bundle\") pod \"neutron-8466455f76-tpjsw\" (UID: \"5518d4cc-191a-448e-8b7b-adc5ddfe587b\") " pod="openstack/neutron-8466455f76-tpjsw" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.718061 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5518d4cc-191a-448e-8b7b-adc5ddfe587b-config\") pod \"neutron-8466455f76-tpjsw\" (UID: \"5518d4cc-191a-448e-8b7b-adc5ddfe587b\") " pod="openstack/neutron-8466455f76-tpjsw" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.723690 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5518d4cc-191a-448e-8b7b-adc5ddfe587b-combined-ca-bundle\") pod \"neutron-8466455f76-tpjsw\" (UID: \"5518d4cc-191a-448e-8b7b-adc5ddfe587b\") " pod="openstack/neutron-8466455f76-tpjsw" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.726476 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5518d4cc-191a-448e-8b7b-adc5ddfe587b-httpd-config\") pod \"neutron-8466455f76-tpjsw\" (UID: \"5518d4cc-191a-448e-8b7b-adc5ddfe587b\") " pod="openstack/neutron-8466455f76-tpjsw" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.727885 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5518d4cc-191a-448e-8b7b-adc5ddfe587b-config\") pod \"neutron-8466455f76-tpjsw\" (UID: \"5518d4cc-191a-448e-8b7b-adc5ddfe587b\") " pod="openstack/neutron-8466455f76-tpjsw" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.735162 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5518d4cc-191a-448e-8b7b-adc5ddfe587b-ovndb-tls-certs\") pod \"neutron-8466455f76-tpjsw\" (UID: \"5518d4cc-191a-448e-8b7b-adc5ddfe587b\") " 
pod="openstack/neutron-8466455f76-tpjsw" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.747783 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nzww\" (UniqueName: \"kubernetes.io/projected/5518d4cc-191a-448e-8b7b-adc5ddfe587b-kube-api-access-8nzww\") pod \"neutron-8466455f76-tpjsw\" (UID: \"5518d4cc-191a-448e-8b7b-adc5ddfe587b\") " pod="openstack/neutron-8466455f76-tpjsw" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.815949 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-wl585" Feb 01 14:38:22 crc kubenswrapper[4820]: I0201 14:38:22.931245 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8466455f76-tpjsw" Feb 01 14:38:23 crc kubenswrapper[4820]: I0201 14:38:23.217548 4820 generic.go:334] "Generic (PLEG): container finished" podID="0b8fff68-565a-4cdf-ac93-d29feede725c" containerID="76efab823dbc8e7d3b3258798efef254f9da33f2e3124206e0cdf2fdf80e38d4" exitCode=0 Feb 01 14:38:23 crc kubenswrapper[4820]: I0201 14:38:23.217648 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" event={"ID":"0b8fff68-565a-4cdf-ac93-d29feede725c","Type":"ContainerDied","Data":"76efab823dbc8e7d3b3258798efef254f9da33f2e3124206e0cdf2fdf80e38d4"} Feb 01 14:38:23 crc kubenswrapper[4820]: I0201 14:38:23.220521 4820 generic.go:334] "Generic (PLEG): container finished" podID="dcf7fd8f-91f1-4742-bcdb-351da5ded25a" containerID="fd3fc9e578d977e62cf38f77e44587ad941d0e227b1abd9e445cc5369537e9ee" exitCode=0 Feb 01 14:38:23 crc kubenswrapper[4820]: I0201 14:38:23.220550 4820 generic.go:334] "Generic (PLEG): container finished" podID="dcf7fd8f-91f1-4742-bcdb-351da5ded25a" containerID="197e5a0c9cf004db3a4026c36aef07736111e6bebdae330ca478b195399c935e" exitCode=2 Feb 01 14:38:23 crc kubenswrapper[4820]: I0201 14:38:23.220635 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcf7fd8f-91f1-4742-bcdb-351da5ded25a","Type":"ContainerDied","Data":"fd3fc9e578d977e62cf38f77e44587ad941d0e227b1abd9e445cc5369537e9ee"} Feb 01 14:38:23 crc kubenswrapper[4820]: I0201 14:38:23.220673 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcf7fd8f-91f1-4742-bcdb-351da5ded25a","Type":"ContainerDied","Data":"197e5a0c9cf004db3a4026c36aef07736111e6bebdae330ca478b195399c935e"} Feb 01 14:38:23 crc kubenswrapper[4820]: I0201 14:38:23.319781 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-wl585"] Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.229968 4820 generic.go:334] "Generic (PLEG): container finished" podID="dcf7fd8f-91f1-4742-bcdb-351da5ded25a" containerID="99b5bbb4f8a5a494992d1807c34ffd7c227efa7601613ac7f4adbaabad0f1718" exitCode=0 Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.230041 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcf7fd8f-91f1-4742-bcdb-351da5ded25a","Type":"ContainerDied","Data":"99b5bbb4f8a5a494992d1807c34ffd7c227efa7601613ac7f4adbaabad0f1718"} Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.231402 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-wl585" event={"ID":"e8f43c3c-32c5-4636-bbc7-21abde903f86","Type":"ContainerStarted","Data":"fac67dd949ff9f160d1270ea0476cfd47ccef46b61789de9324ea8e2c6963718"} Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 
14:38:24.568482 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8bc6c8777-j2rkq"] Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.571120 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8bc6c8777-j2rkq" Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.573369 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.591469 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.625113 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8bc6c8777-j2rkq"] Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.654311 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" podUID="0b8fff68-565a-4cdf-ac93-d29feede725c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: connect: connection refused" Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.660950 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e78848a-f345-4cf8-a149-4d2a3d6de52e-ovndb-tls-certs\") pod \"neutron-8bc6c8777-j2rkq\" (UID: \"6e78848a-f345-4cf8-a149-4d2a3d6de52e\") " pod="openstack/neutron-8bc6c8777-j2rkq" Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.661030 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e78848a-f345-4cf8-a149-4d2a3d6de52e-public-tls-certs\") pod \"neutron-8bc6c8777-j2rkq\" (UID: \"6e78848a-f345-4cf8-a149-4d2a3d6de52e\") " pod="openstack/neutron-8bc6c8777-j2rkq" Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.661066 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6e78848a-f345-4cf8-a149-4d2a3d6de52e-config\") pod \"neutron-8bc6c8777-j2rkq\" (UID: \"6e78848a-f345-4cf8-a149-4d2a3d6de52e\") " pod="openstack/neutron-8bc6c8777-j2rkq" Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.661155 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj4t5\" (UniqueName: \"kubernetes.io/projected/6e78848a-f345-4cf8-a149-4d2a3d6de52e-kube-api-access-pj4t5\") pod \"neutron-8bc6c8777-j2rkq\" (UID: \"6e78848a-f345-4cf8-a149-4d2a3d6de52e\") " pod="openstack/neutron-8bc6c8777-j2rkq" Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.661178 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e78848a-f345-4cf8-a149-4d2a3d6de52e-internal-tls-certs\") pod \"neutron-8bc6c8777-j2rkq\" (UID: \"6e78848a-f345-4cf8-a149-4d2a3d6de52e\") " pod="openstack/neutron-8bc6c8777-j2rkq" Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.661222 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6e78848a-f345-4cf8-a149-4d2a3d6de52e-httpd-config\") pod \"neutron-8bc6c8777-j2rkq\" (UID: \"6e78848a-f345-4cf8-a149-4d2a3d6de52e\") " pod="openstack/neutron-8bc6c8777-j2rkq" Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.661255 
4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e78848a-f345-4cf8-a149-4d2a3d6de52e-combined-ca-bundle\") pod \"neutron-8bc6c8777-j2rkq\" (UID: \"6e78848a-f345-4cf8-a149-4d2a3d6de52e\") " pod="openstack/neutron-8bc6c8777-j2rkq" Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.765816 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4t5\" (UniqueName: \"kubernetes.io/projected/6e78848a-f345-4cf8-a149-4d2a3d6de52e-kube-api-access-pj4t5\") pod \"neutron-8bc6c8777-j2rkq\" (UID: \"6e78848a-f345-4cf8-a149-4d2a3d6de52e\") " pod="openstack/neutron-8bc6c8777-j2rkq" Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.766771 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e78848a-f345-4cf8-a149-4d2a3d6de52e-internal-tls-certs\") pod \"neutron-8bc6c8777-j2rkq\" (UID: \"6e78848a-f345-4cf8-a149-4d2a3d6de52e\") " pod="openstack/neutron-8bc6c8777-j2rkq" Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.766996 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6e78848a-f345-4cf8-a149-4d2a3d6de52e-httpd-config\") pod \"neutron-8bc6c8777-j2rkq\" (UID: \"6e78848a-f345-4cf8-a149-4d2a3d6de52e\") " pod="openstack/neutron-8bc6c8777-j2rkq" Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.767101 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e78848a-f345-4cf8-a149-4d2a3d6de52e-combined-ca-bundle\") pod \"neutron-8bc6c8777-j2rkq\" (UID: \"6e78848a-f345-4cf8-a149-4d2a3d6de52e\") " pod="openstack/neutron-8bc6c8777-j2rkq" Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.767216 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e78848a-f345-4cf8-a149-4d2a3d6de52e-ovndb-tls-certs\") pod \"neutron-8bc6c8777-j2rkq\" (UID: \"6e78848a-f345-4cf8-a149-4d2a3d6de52e\") " pod="openstack/neutron-8bc6c8777-j2rkq" Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.767331 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e78848a-f345-4cf8-a149-4d2a3d6de52e-public-tls-certs\") pod \"neutron-8bc6c8777-j2rkq\" (UID: \"6e78848a-f345-4cf8-a149-4d2a3d6de52e\") " pod="openstack/neutron-8bc6c8777-j2rkq" Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.767410 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6e78848a-f345-4cf8-a149-4d2a3d6de52e-config\") pod \"neutron-8bc6c8777-j2rkq\" (UID: \"6e78848a-f345-4cf8-a149-4d2a3d6de52e\") " pod="openstack/neutron-8bc6c8777-j2rkq" Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.795936 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e78848a-f345-4cf8-a149-4d2a3d6de52e-ovndb-tls-certs\") pod \"neutron-8bc6c8777-j2rkq\" (UID: \"6e78848a-f345-4cf8-a149-4d2a3d6de52e\") " pod="openstack/neutron-8bc6c8777-j2rkq" Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.796720 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/6e78848a-f345-4cf8-a149-4d2a3d6de52e-config\") pod \"neutron-8bc6c8777-j2rkq\" (UID: \"6e78848a-f345-4cf8-a149-4d2a3d6de52e\") " pod="openstack/neutron-8bc6c8777-j2rkq" Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.798957 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6e78848a-f345-4cf8-a149-4d2a3d6de52e-httpd-config\") pod \"neutron-8bc6c8777-j2rkq\" (UID: \"6e78848a-f345-4cf8-a149-4d2a3d6de52e\") " pod="openstack/neutron-8bc6c8777-j2rkq" Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.800253 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e78848a-f345-4cf8-a149-4d2a3d6de52e-public-tls-certs\") pod \"neutron-8bc6c8777-j2rkq\" (UID: \"6e78848a-f345-4cf8-a149-4d2a3d6de52e\") " pod="openstack/neutron-8bc6c8777-j2rkq" Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.800801 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e78848a-f345-4cf8-a149-4d2a3d6de52e-internal-tls-certs\") pod \"neutron-8bc6c8777-j2rkq\" (UID: \"6e78848a-f345-4cf8-a149-4d2a3d6de52e\") " pod="openstack/neutron-8bc6c8777-j2rkq" Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.805915 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj4t5\" (UniqueName: \"kubernetes.io/projected/6e78848a-f345-4cf8-a149-4d2a3d6de52e-kube-api-access-pj4t5\") pod \"neutron-8bc6c8777-j2rkq\" (UID: \"6e78848a-f345-4cf8-a149-4d2a3d6de52e\") " pod="openstack/neutron-8bc6c8777-j2rkq" Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.806550 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e78848a-f345-4cf8-a149-4d2a3d6de52e-combined-ca-bundle\") pod \"neutron-8bc6c8777-j2rkq\" (UID: \"6e78848a-f345-4cf8-a149-4d2a3d6de52e\") " pod="openstack/neutron-8bc6c8777-j2rkq" Feb 01 14:38:24 crc kubenswrapper[4820]: I0201 14:38:24.905032 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8bc6c8777-j2rkq" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.251225 4820 generic.go:334] "Generic (PLEG): container finished" podID="e8f43c3c-32c5-4636-bbc7-21abde903f86" containerID="da7f111b7d2f3e3149a23c3f9bf8ad2d90e4de18350430b8d9def1e1d9506c3b" exitCode=0 Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.251454 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-wl585" event={"ID":"e8f43c3c-32c5-4636-bbc7-21abde903f86","Type":"ContainerDied","Data":"da7f111b7d2f3e3149a23c3f9bf8ad2d90e4de18350430b8d9def1e1d9506c3b"} Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.416501 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8466455f76-tpjsw"] Feb 01 14:38:25 crc kubenswrapper[4820]: W0201 14:38:25.533104 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e78848a_f345_4cf8_a149_4d2a3d6de52e.slice/crio-4ed21da19e73d6512749b2f3cd2f30a5a01be405c4d47f185c6482a4fe1a1dc4 WatchSource:0}: Error finding container 4ed21da19e73d6512749b2f3cd2f30a5a01be405c4d47f185c6482a4fe1a1dc4: Status 404 returned error can't find the container with id 4ed21da19e73d6512749b2f3cd2f30a5a01be405c4d47f185c6482a4fe1a1dc4 Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.544441 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.552722 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.558168 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8bc6c8777-j2rkq"] Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.595636 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-sg-core-conf-yaml\") pod \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.595912 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-run-httpd\") pod \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.596041 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-combined-ca-bundle\") pod \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.596137 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck6bj\" (UniqueName: \"kubernetes.io/projected/0b8fff68-565a-4cdf-ac93-d29feede725c-kube-api-access-ck6bj\") pod \"0b8fff68-565a-4cdf-ac93-d29feede725c\" (UID: \"0b8fff68-565a-4cdf-ac93-d29feede725c\") " Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.596245 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qspv5\" (UniqueName: 
\"kubernetes.io/projected/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-kube-api-access-qspv5\") pod \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.596378 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b8fff68-565a-4cdf-ac93-d29feede725c-config\") pod \"0b8fff68-565a-4cdf-ac93-d29feede725c\" (UID: \"0b8fff68-565a-4cdf-ac93-d29feede725c\") " Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.596486 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b8fff68-565a-4cdf-ac93-d29feede725c-ovsdbserver-sb\") pod \"0b8fff68-565a-4cdf-ac93-d29feede725c\" (UID: \"0b8fff68-565a-4cdf-ac93-d29feede725c\") " Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.596575 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b8fff68-565a-4cdf-ac93-d29feede725c-ovsdbserver-nb\") pod \"0b8fff68-565a-4cdf-ac93-d29feede725c\" (UID: \"0b8fff68-565a-4cdf-ac93-d29feede725c\") " Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.596686 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-scripts\") pod \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.596787 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b8fff68-565a-4cdf-ac93-d29feede725c-dns-svc\") pod \"0b8fff68-565a-4cdf-ac93-d29feede725c\" (UID: \"0b8fff68-565a-4cdf-ac93-d29feede725c\") " Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.596908 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-log-httpd\") pod \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.597036 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-config-data\") pod \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\" (UID: \"dcf7fd8f-91f1-4742-bcdb-351da5ded25a\") " Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.598081 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dcf7fd8f-91f1-4742-bcdb-351da5ded25a" (UID: "dcf7fd8f-91f1-4742-bcdb-351da5ded25a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.607037 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b8fff68-565a-4cdf-ac93-d29feede725c-kube-api-access-ck6bj" (OuterVolumeSpecName: "kube-api-access-ck6bj") pod "0b8fff68-565a-4cdf-ac93-d29feede725c" (UID: "0b8fff68-565a-4cdf-ac93-d29feede725c"). InnerVolumeSpecName "kube-api-access-ck6bj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.610979 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dcf7fd8f-91f1-4742-bcdb-351da5ded25a" (UID: "dcf7fd8f-91f1-4742-bcdb-351da5ded25a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.625081 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-kube-api-access-qspv5" (OuterVolumeSpecName: "kube-api-access-qspv5") pod "dcf7fd8f-91f1-4742-bcdb-351da5ded25a" (UID: "dcf7fd8f-91f1-4742-bcdb-351da5ded25a"). InnerVolumeSpecName "kube-api-access-qspv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.640295 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-scripts" (OuterVolumeSpecName: "scripts") pod "dcf7fd8f-91f1-4742-bcdb-351da5ded25a" (UID: "dcf7fd8f-91f1-4742-bcdb-351da5ded25a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.670653 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b8fff68-565a-4cdf-ac93-d29feede725c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0b8fff68-565a-4cdf-ac93-d29feede725c" (UID: "0b8fff68-565a-4cdf-ac93-d29feede725c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.690528 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dcf7fd8f-91f1-4742-bcdb-351da5ded25a" (UID: "dcf7fd8f-91f1-4742-bcdb-351da5ded25a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.690634 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dcf7fd8f-91f1-4742-bcdb-351da5ded25a" (UID: "dcf7fd8f-91f1-4742-bcdb-351da5ded25a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.690939 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b8fff68-565a-4cdf-ac93-d29feede725c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0b8fff68-565a-4cdf-ac93-d29feede725c" (UID: "0b8fff68-565a-4cdf-ac93-d29feede725c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.692337 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b8fff68-565a-4cdf-ac93-d29feede725c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0b8fff68-565a-4cdf-ac93-d29feede725c" (UID: "0b8fff68-565a-4cdf-ac93-d29feede725c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.699316 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b8fff68-565a-4cdf-ac93-d29feede725c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.699355 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b8fff68-565a-4cdf-ac93-d29feede725c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.699370 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.699382 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b8fff68-565a-4cdf-ac93-d29feede725c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.699395 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.699410 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.699424 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.699435 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.699447 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck6bj\" (UniqueName: \"kubernetes.io/projected/0b8fff68-565a-4cdf-ac93-d29feede725c-kube-api-access-ck6bj\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.699461 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qspv5\" (UniqueName: \"kubernetes.io/projected/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-kube-api-access-qspv5\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.714738 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b8fff68-565a-4cdf-ac93-d29feede725c-config" (OuterVolumeSpecName: "config") pod "0b8fff68-565a-4cdf-ac93-d29feede725c" (UID: "0b8fff68-565a-4cdf-ac93-d29feede725c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.749040 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-config-data" (OuterVolumeSpecName: "config-data") pod "dcf7fd8f-91f1-4742-bcdb-351da5ded25a" (UID: "dcf7fd8f-91f1-4742-bcdb-351da5ded25a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.800671 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b8fff68-565a-4cdf-ac93-d29feede725c-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:25 crc kubenswrapper[4820]: I0201 14:38:25.800703 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf7fd8f-91f1-4742-bcdb-351da5ded25a-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.263753 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-wl585" event={"ID":"e8f43c3c-32c5-4636-bbc7-21abde903f86","Type":"ContainerStarted","Data":"b4df40e7ca6062046e31fc331cbf04f67f871ae77b5e3931cc600f70b1867a9f"} Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.264255 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-869f779d85-wl585" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.266658 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcf7fd8f-91f1-4742-bcdb-351da5ded25a","Type":"ContainerDied","Data":"e2c32e105b37fbd29c13e786a7280a462b7f2fc37cf7acefc7473d5be1f93646"} Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.266711 4820 scope.go:117] "RemoveContainer" containerID="fd3fc9e578d977e62cf38f77e44587ad941d0e227b1abd9e445cc5369537e9ee" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.266841 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.271679 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8466455f76-tpjsw" event={"ID":"5518d4cc-191a-448e-8b7b-adc5ddfe587b","Type":"ContainerStarted","Data":"40f58b40519c58dd81677771afb561ab72657721f0e8be248de46750194b4745"} Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.271707 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8466455f76-tpjsw" event={"ID":"5518d4cc-191a-448e-8b7b-adc5ddfe587b","Type":"ContainerStarted","Data":"af4b3c1226e0d0dddba0410c80616963139422b24675727d313dffa36b17bd07"} Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.271717 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8466455f76-tpjsw" event={"ID":"5518d4cc-191a-448e-8b7b-adc5ddfe587b","Type":"ContainerStarted","Data":"d991a29cec5528aa23b64309f0770824817b14d171a8bba3455a233c972bbda6"} Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.272165 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8466455f76-tpjsw" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.273734 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" event={"ID":"0b8fff68-565a-4cdf-ac93-d29feede725c","Type":"ContainerDied","Data":"574b90a3e526ed65eae42fa04e04fab7d5b873f3716b858c9c61b9bbb1290fc1"} Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.273790 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-h2nnx" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.281410 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8bc6c8777-j2rkq" event={"ID":"6e78848a-f345-4cf8-a149-4d2a3d6de52e","Type":"ContainerStarted","Data":"62a43b27591007066b532edf267610842c703c6dbdc550f4830b3b48ff877908"} Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.281475 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8bc6c8777-j2rkq" event={"ID":"6e78848a-f345-4cf8-a149-4d2a3d6de52e","Type":"ContainerStarted","Data":"35a5ded5cd35cdf1fc751a348e6efbbe62d71986a411fec69bcb05a591ba56ae"} Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.281490 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8bc6c8777-j2rkq" event={"ID":"6e78848a-f345-4cf8-a149-4d2a3d6de52e","Type":"ContainerStarted","Data":"4ed21da19e73d6512749b2f3cd2f30a5a01be405c4d47f185c6482a4fe1a1dc4"} Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.281868 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8bc6c8777-j2rkq" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.296122 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-869f779d85-wl585" podStartSLOduration=4.296100741 podStartE2EDuration="4.296100741s" podCreationTimestamp="2026-02-01 14:38:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:38:26.288168347 +0000 UTC m=+1047.808534631" watchObservedRunningTime="2026-02-01 14:38:26.296100741 +0000 UTC m=+1047.816467015" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.309083 4820 scope.go:117] "RemoveContainer" containerID="197e5a0c9cf004db3a4026c36aef07736111e6bebdae330ca478b195399c935e" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.316681 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8bc6c8777-j2rkq" podStartSLOduration=2.316664671 podStartE2EDuration="2.316664671s" podCreationTimestamp="2026-02-01 14:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:38:26.308081692 +0000 UTC m=+1047.828447976" watchObservedRunningTime="2026-02-01 14:38:26.316664671 +0000 UTC m=+1047.837030955" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.340248 4820 scope.go:117] "RemoveContainer" containerID="99b5bbb4f8a5a494992d1807c34ffd7c227efa7601613ac7f4adbaabad0f1718" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.374242 4820 scope.go:117] "RemoveContainer" containerID="76efab823dbc8e7d3b3258798efef254f9da33f2e3124206e0cdf2fdf80e38d4" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.379144 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8466455f76-tpjsw" podStartSLOduration=4.3791238230000005 podStartE2EDuration="4.379123823s" podCreationTimestamp="2026-02-01 14:38:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:38:26.361964615 +0000 UTC m=+1047.882330899" watchObservedRunningTime="2026-02-01 14:38:26.379123823 +0000 UTC m=+1047.899490107" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.397778 4820 scope.go:117] "RemoveContainer" 
containerID="b0f7f6e59647bdd432ddb2bf7c10db393c8fec412170402e6fa52975b5dd0c8f" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.478025 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.498832 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-746455c778-hw955" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.514048 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.546762 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-h2nnx"] Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.550298 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-h2nnx"] Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.561128 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:38:26 crc kubenswrapper[4820]: E0201 14:38:26.561532 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcf7fd8f-91f1-4742-bcdb-351da5ded25a" containerName="ceilometer-notification-agent" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.561549 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcf7fd8f-91f1-4742-bcdb-351da5ded25a" containerName="ceilometer-notification-agent" Feb 01 14:38:26 crc kubenswrapper[4820]: E0201 14:38:26.561566 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b8fff68-565a-4cdf-ac93-d29feede725c" containerName="init" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.561572 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b8fff68-565a-4cdf-ac93-d29feede725c" containerName="init" Feb 01 14:38:26 crc kubenswrapper[4820]: E0201 14:38:26.561590 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcf7fd8f-91f1-4742-bcdb-351da5ded25a" containerName="sg-core" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.561601 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcf7fd8f-91f1-4742-bcdb-351da5ded25a" containerName="sg-core" Feb 01 14:38:26 crc kubenswrapper[4820]: E0201 14:38:26.561630 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcf7fd8f-91f1-4742-bcdb-351da5ded25a" containerName="proxy-httpd" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.561636 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcf7fd8f-91f1-4742-bcdb-351da5ded25a" containerName="proxy-httpd" Feb 01 14:38:26 crc kubenswrapper[4820]: E0201 14:38:26.561646 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b8fff68-565a-4cdf-ac93-d29feede725c" containerName="dnsmasq-dns" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.561654 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b8fff68-565a-4cdf-ac93-d29feede725c" containerName="dnsmasq-dns" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.561807 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b8fff68-565a-4cdf-ac93-d29feede725c" containerName="dnsmasq-dns" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.561820 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcf7fd8f-91f1-4742-bcdb-351da5ded25a" containerName="ceilometer-notification-agent" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.561829 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="dcf7fd8f-91f1-4742-bcdb-351da5ded25a" containerName="sg-core" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.561840 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcf7fd8f-91f1-4742-bcdb-351da5ded25a" containerName="proxy-httpd" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.563518 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.567702 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.568592 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.569769 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.618762 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8255e6c-59ea-4449-bcec-264a12bf6d6e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") " pod="openstack/ceilometer-0" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.618819 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8255e6c-59ea-4449-bcec-264a12bf6d6e-config-data\") pod \"ceilometer-0\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") " pod="openstack/ceilometer-0" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.619143 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8255e6c-59ea-4449-bcec-264a12bf6d6e-run-httpd\") pod \"ceilometer-0\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") " pod="openstack/ceilometer-0" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.619210 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c674s\" (UniqueName: \"kubernetes.io/projected/d8255e6c-59ea-4449-bcec-264a12bf6d6e-kube-api-access-c674s\") pod \"ceilometer-0\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") " pod="openstack/ceilometer-0" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.619250 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8255e6c-59ea-4449-bcec-264a12bf6d6e-scripts\") pod \"ceilometer-0\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") " pod="openstack/ceilometer-0" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.619307 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8255e6c-59ea-4449-bcec-264a12bf6d6e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") " pod="openstack/ceilometer-0" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.619459 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8255e6c-59ea-4449-bcec-264a12bf6d6e-log-httpd\") pod \"ceilometer-0\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") " pod="openstack/ceilometer-0" Feb 01 14:38:26 crc 
kubenswrapper[4820]: I0201 14:38:26.682580 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-746455c778-hw955" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.721273 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8255e6c-59ea-4449-bcec-264a12bf6d6e-run-httpd\") pod \"ceilometer-0\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") " pod="openstack/ceilometer-0" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.721317 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c674s\" (UniqueName: \"kubernetes.io/projected/d8255e6c-59ea-4449-bcec-264a12bf6d6e-kube-api-access-c674s\") pod \"ceilometer-0\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") " pod="openstack/ceilometer-0" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.721340 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8255e6c-59ea-4449-bcec-264a12bf6d6e-scripts\") pod \"ceilometer-0\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") " pod="openstack/ceilometer-0" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.721378 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8255e6c-59ea-4449-bcec-264a12bf6d6e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") " pod="openstack/ceilometer-0" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.721430 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8255e6c-59ea-4449-bcec-264a12bf6d6e-log-httpd\") pod \"ceilometer-0\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") " pod="openstack/ceilometer-0" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.721486 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8255e6c-59ea-4449-bcec-264a12bf6d6e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") " pod="openstack/ceilometer-0" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.721506 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8255e6c-59ea-4449-bcec-264a12bf6d6e-config-data\") pod \"ceilometer-0\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") " pod="openstack/ceilometer-0" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.722075 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8255e6c-59ea-4449-bcec-264a12bf6d6e-log-httpd\") pod \"ceilometer-0\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") " pod="openstack/ceilometer-0" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.722487 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8255e6c-59ea-4449-bcec-264a12bf6d6e-run-httpd\") pod \"ceilometer-0\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") " pod="openstack/ceilometer-0" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.730605 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8255e6c-59ea-4449-bcec-264a12bf6d6e-scripts\") pod 
\"ceilometer-0\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") " pod="openstack/ceilometer-0" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.737935 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8255e6c-59ea-4449-bcec-264a12bf6d6e-config-data\") pod \"ceilometer-0\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") " pod="openstack/ceilometer-0" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.738545 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8255e6c-59ea-4449-bcec-264a12bf6d6e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") " pod="openstack/ceilometer-0" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.744525 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8255e6c-59ea-4449-bcec-264a12bf6d6e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") " pod="openstack/ceilometer-0" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.748261 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c674s\" (UniqueName: \"kubernetes.io/projected/d8255e6c-59ea-4449-bcec-264a12bf6d6e-kube-api-access-c674s\") pod \"ceilometer-0\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") " pod="openstack/ceilometer-0" Feb 01 14:38:26 crc kubenswrapper[4820]: I0201 14:38:26.880189 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 01 14:38:27 crc kubenswrapper[4820]: I0201 14:38:27.209995 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b8fff68-565a-4cdf-ac93-d29feede725c" path="/var/lib/kubelet/pods/0b8fff68-565a-4cdf-ac93-d29feede725c/volumes" Feb 01 14:38:27 crc kubenswrapper[4820]: I0201 14:38:27.210952 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcf7fd8f-91f1-4742-bcdb-351da5ded25a" path="/var/lib/kubelet/pods/dcf7fd8f-91f1-4742-bcdb-351da5ded25a/volumes" Feb 01 14:38:27 crc kubenswrapper[4820]: W0201 14:38:27.336462 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8255e6c_59ea_4449_bcec_264a12bf6d6e.slice/crio-76936d492ceac0fbfeeede8ea87447c86774da965e25e10dd874d13d7252157c WatchSource:0}: Error finding container 76936d492ceac0fbfeeede8ea87447c86774da965e25e10dd874d13d7252157c: Status 404 returned error can't find the container with id 76936d492ceac0fbfeeede8ea87447c86774da965e25e10dd874d13d7252157c Feb 01 14:38:27 crc kubenswrapper[4820]: I0201 14:38:27.340826 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:38:28 crc kubenswrapper[4820]: I0201 14:38:28.300048 4820 generic.go:334] "Generic (PLEG): container finished" podID="857bc684-4d17-461f-9183-6c0a7ac89845" containerID="5244e6bbf496ccea2ca747b01093f042b2d370a7e255bb50c57b39c6e4e75982" exitCode=0 Feb 01 14:38:28 crc kubenswrapper[4820]: I0201 14:38:28.300155 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wpqvp" event={"ID":"857bc684-4d17-461f-9183-6c0a7ac89845","Type":"ContainerDied","Data":"5244e6bbf496ccea2ca747b01093f042b2d370a7e255bb50c57b39c6e4e75982"} Feb 01 14:38:28 crc kubenswrapper[4820]: I0201 14:38:28.301954 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"d8255e6c-59ea-4449-bcec-264a12bf6d6e","Type":"ContainerStarted","Data":"7cbd242453c301fa3a31d5cacccb321c74f47f5a345754b05a5f2e825fff03e7"} Feb 01 14:38:28 crc kubenswrapper[4820]: I0201 14:38:28.301986 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8255e6c-59ea-4449-bcec-264a12bf6d6e","Type":"ContainerStarted","Data":"76936d492ceac0fbfeeede8ea87447c86774da965e25e10dd874d13d7252157c"} Feb 01 14:38:28 crc kubenswrapper[4820]: I0201 14:38:28.528197 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:28 crc kubenswrapper[4820]: I0201 14:38:28.762161 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-84cfb79c88-556g7" Feb 01 14:38:28 crc kubenswrapper[4820]: I0201 14:38:28.877315 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-746455c778-hw955"] Feb 01 14:38:28 crc kubenswrapper[4820]: I0201 14:38:28.878356 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-746455c778-hw955" podUID="d446e958-fb74-4146-9dbd-ec4720536b9e" containerName="barbican-api-log" containerID="cri-o://c44f12df7794ddd18b9bb72de290241b6725a1dfe070dfcbd9075486023955af" gracePeriod=30 Feb 01 14:38:28 crc kubenswrapper[4820]: I0201 14:38:28.878920 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-746455c778-hw955" podUID="d446e958-fb74-4146-9dbd-ec4720536b9e" containerName="barbican-api" containerID="cri-o://0ebce16cff1333d3d6dca89e21978509424e41e1fb668ab1353676d936244ba1" gracePeriod=30 Feb 01 14:38:29 crc kubenswrapper[4820]: I0201 14:38:29.310763 4820 generic.go:334] "Generic (PLEG): container finished" podID="d446e958-fb74-4146-9dbd-ec4720536b9e" containerID="c44f12df7794ddd18b9bb72de290241b6725a1dfe070dfcbd9075486023955af" exitCode=143 Feb 01 14:38:29 crc kubenswrapper[4820]: I0201 14:38:29.311078 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-746455c778-hw955" event={"ID":"d446e958-fb74-4146-9dbd-ec4720536b9e","Type":"ContainerDied","Data":"c44f12df7794ddd18b9bb72de290241b6725a1dfe070dfcbd9075486023955af"} Feb 01 14:38:29 crc kubenswrapper[4820]: I0201 14:38:29.313950 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8255e6c-59ea-4449-bcec-264a12bf6d6e","Type":"ContainerStarted","Data":"ff83f9d864b935236ca972a88ea10cd2856a7f045d007f6e9b1df79a178b0152"} Feb 01 14:38:29 crc kubenswrapper[4820]: I0201 14:38:29.681232 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-wpqvp" Feb 01 14:38:29 crc kubenswrapper[4820]: I0201 14:38:29.844553 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/857bc684-4d17-461f-9183-6c0a7ac89845-scripts\") pod \"857bc684-4d17-461f-9183-6c0a7ac89845\" (UID: \"857bc684-4d17-461f-9183-6c0a7ac89845\") " Feb 01 14:38:29 crc kubenswrapper[4820]: I0201 14:38:29.844616 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/857bc684-4d17-461f-9183-6c0a7ac89845-combined-ca-bundle\") pod \"857bc684-4d17-461f-9183-6c0a7ac89845\" (UID: \"857bc684-4d17-461f-9183-6c0a7ac89845\") " Feb 01 14:38:29 crc kubenswrapper[4820]: I0201 14:38:29.844635 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/857bc684-4d17-461f-9183-6c0a7ac89845-etc-machine-id\") pod \"857bc684-4d17-461f-9183-6c0a7ac89845\" (UID: \"857bc684-4d17-461f-9183-6c0a7ac89845\") " Feb 01 14:38:29 crc kubenswrapper[4820]: I0201 14:38:29.844707 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg8bz\" (UniqueName: \"kubernetes.io/projected/857bc684-4d17-461f-9183-6c0a7ac89845-kube-api-access-jg8bz\") pod \"857bc684-4d17-461f-9183-6c0a7ac89845\" (UID: \"857bc684-4d17-461f-9183-6c0a7ac89845\") " Feb 01 14:38:29 crc kubenswrapper[4820]: I0201 14:38:29.844778 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/857bc684-4d17-461f-9183-6c0a7ac89845-config-data\") pod \"857bc684-4d17-461f-9183-6c0a7ac89845\" (UID: \"857bc684-4d17-461f-9183-6c0a7ac89845\") " Feb 01 14:38:29 crc kubenswrapper[4820]: I0201 14:38:29.844848 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/857bc684-4d17-461f-9183-6c0a7ac89845-db-sync-config-data\") pod \"857bc684-4d17-461f-9183-6c0a7ac89845\" (UID: \"857bc684-4d17-461f-9183-6c0a7ac89845\") " Feb 01 14:38:29 crc kubenswrapper[4820]: I0201 14:38:29.845846 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/857bc684-4d17-461f-9183-6c0a7ac89845-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "857bc684-4d17-461f-9183-6c0a7ac89845" (UID: "857bc684-4d17-461f-9183-6c0a7ac89845"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:38:29 crc kubenswrapper[4820]: I0201 14:38:29.850016 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/857bc684-4d17-461f-9183-6c0a7ac89845-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "857bc684-4d17-461f-9183-6c0a7ac89845" (UID: "857bc684-4d17-461f-9183-6c0a7ac89845"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:29 crc kubenswrapper[4820]: I0201 14:38:29.851621 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/857bc684-4d17-461f-9183-6c0a7ac89845-scripts" (OuterVolumeSpecName: "scripts") pod "857bc684-4d17-461f-9183-6c0a7ac89845" (UID: "857bc684-4d17-461f-9183-6c0a7ac89845"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:29 crc kubenswrapper[4820]: I0201 14:38:29.856112 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/857bc684-4d17-461f-9183-6c0a7ac89845-kube-api-access-jg8bz" (OuterVolumeSpecName: "kube-api-access-jg8bz") pod "857bc684-4d17-461f-9183-6c0a7ac89845" (UID: "857bc684-4d17-461f-9183-6c0a7ac89845"). InnerVolumeSpecName "kube-api-access-jg8bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:38:29 crc kubenswrapper[4820]: I0201 14:38:29.880915 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/857bc684-4d17-461f-9183-6c0a7ac89845-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "857bc684-4d17-461f-9183-6c0a7ac89845" (UID: "857bc684-4d17-461f-9183-6c0a7ac89845"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:29 crc kubenswrapper[4820]: I0201 14:38:29.898983 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/857bc684-4d17-461f-9183-6c0a7ac89845-config-data" (OuterVolumeSpecName: "config-data") pod "857bc684-4d17-461f-9183-6c0a7ac89845" (UID: "857bc684-4d17-461f-9183-6c0a7ac89845"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:29 crc kubenswrapper[4820]: I0201 14:38:29.946586 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/857bc684-4d17-461f-9183-6c0a7ac89845-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:29 crc kubenswrapper[4820]: I0201 14:38:29.946622 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/857bc684-4d17-461f-9183-6c0a7ac89845-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:29 crc kubenswrapper[4820]: I0201 14:38:29.946632 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg8bz\" (UniqueName: \"kubernetes.io/projected/857bc684-4d17-461f-9183-6c0a7ac89845-kube-api-access-jg8bz\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:29 crc kubenswrapper[4820]: I0201 14:38:29.946640 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/857bc684-4d17-461f-9183-6c0a7ac89845-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:29 crc kubenswrapper[4820]: I0201 14:38:29.946649 4820 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/857bc684-4d17-461f-9183-6c0a7ac89845-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:29 crc kubenswrapper[4820]: I0201 14:38:29.946657 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/857bc684-4d17-461f-9183-6c0a7ac89845-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.327654 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8255e6c-59ea-4449-bcec-264a12bf6d6e","Type":"ContainerStarted","Data":"47e65daeb231c7c2ad17b8e7f0b3aff16b728fbd33206931915d3352cced240c"} Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.329954 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-wpqvp" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.330084 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wpqvp" event={"ID":"857bc684-4d17-461f-9183-6c0a7ac89845","Type":"ContainerDied","Data":"e8f12a4786490ef01678bb0b9959661b5af49ed7449338204316b90bd47cbbaf"} Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.330179 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8f12a4786490ef01678bb0b9959661b5af49ed7449338204316b90bd47cbbaf" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.613793 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 01 14:38:30 crc kubenswrapper[4820]: E0201 14:38:30.621258 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="857bc684-4d17-461f-9183-6c0a7ac89845" containerName="cinder-db-sync" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.621297 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="857bc684-4d17-461f-9183-6c0a7ac89845" containerName="cinder-db-sync" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.621542 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="857bc684-4d17-461f-9183-6c0a7ac89845" containerName="cinder-db-sync" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.622526 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.627859 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.628237 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.628435 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.628571 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2r5j4" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.638095 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.713023 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-wl585"] Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.713256 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-869f779d85-wl585" podUID="e8f43c3c-32c5-4636-bbc7-21abde903f86" containerName="dnsmasq-dns" containerID="cri-o://b4df40e7ca6062046e31fc331cbf04f67f871ae77b5e3931cc600f70b1867a9f" gracePeriod=10 Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.716583 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-869f779d85-wl585" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.751035 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-k2nl7"] Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.752394 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.763861 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bsl9\" (UniqueName: \"kubernetes.io/projected/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-kube-api-access-7bsl9\") pod \"cinder-scheduler-0\" (UID: \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.763938 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.763996 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.764031 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-scripts\") pod \"cinder-scheduler-0\" (UID: \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.764064 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-config-data\") pod \"cinder-scheduler-0\" (UID: \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.764093 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.765561 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-k2nl7"] Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.868859 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-scripts\") pod \"cinder-scheduler-0\" (UID: \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.869575 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt6t5\" (UniqueName: \"kubernetes.io/projected/495c2732-0847-4e56-a609-1a24244a4969-kube-api-access-zt6t5\") pod \"dnsmasq-dns-58db5546cc-k2nl7\" (UID: \"495c2732-0847-4e56-a609-1a24244a4969\") " pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.869646 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-config-data\") pod \"cinder-scheduler-0\" (UID: \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.869686 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/495c2732-0847-4e56-a609-1a24244a4969-dns-svc\") pod \"dnsmasq-dns-58db5546cc-k2nl7\" (UID: \"495c2732-0847-4e56-a609-1a24244a4969\") " pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.869708 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.869731 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/495c2732-0847-4e56-a609-1a24244a4969-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-k2nl7\" (UID: \"495c2732-0847-4e56-a609-1a24244a4969\") " pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.869787 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bsl9\" (UniqueName: \"kubernetes.io/projected/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-kube-api-access-7bsl9\") pod \"cinder-scheduler-0\" (UID: \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.869827 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.869865 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/495c2732-0847-4e56-a609-1a24244a4969-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-k2nl7\" (UID: \"495c2732-0847-4e56-a609-1a24244a4969\") " pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.869904 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/495c2732-0847-4e56-a609-1a24244a4969-config\") pod \"dnsmasq-dns-58db5546cc-k2nl7\" (UID: \"495c2732-0847-4e56-a609-1a24244a4969\") " pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.869923 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.870805 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-etc-machine-id\") pod 
\"cinder-scheduler-0\" (UID: \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.875714 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.875889 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-scripts\") pod \"cinder-scheduler-0\" (UID: \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.876528 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.894885 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bsl9\" (UniqueName: \"kubernetes.io/projected/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-kube-api-access-7bsl9\") pod \"cinder-scheduler-0\" (UID: \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.898969 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-config-data\") pod \"cinder-scheduler-0\" (UID: \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.950435 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.951873 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.953190 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.955221 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.965161 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.971652 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/495c2732-0847-4e56-a609-1a24244a4969-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-k2nl7\" (UID: \"495c2732-0847-4e56-a609-1a24244a4969\") " pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.972143 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/495c2732-0847-4e56-a609-1a24244a4969-config\") pod \"dnsmasq-dns-58db5546cc-k2nl7\" (UID: \"495c2732-0847-4e56-a609-1a24244a4969\") " pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.972403 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt6t5\" (UniqueName: \"kubernetes.io/projected/495c2732-0847-4e56-a609-1a24244a4969-kube-api-access-zt6t5\") pod \"dnsmasq-dns-58db5546cc-k2nl7\" (UID: \"495c2732-0847-4e56-a609-1a24244a4969\") " pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.972667 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/495c2732-0847-4e56-a609-1a24244a4969-dns-svc\") pod \"dnsmasq-dns-58db5546cc-k2nl7\" (UID: \"495c2732-0847-4e56-a609-1a24244a4969\") " pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.972793 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/495c2732-0847-4e56-a609-1a24244a4969-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-k2nl7\" (UID: \"495c2732-0847-4e56-a609-1a24244a4969\") " pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.973093 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/495c2732-0847-4e56-a609-1a24244a4969-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-k2nl7\" (UID: \"495c2732-0847-4e56-a609-1a24244a4969\") " pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.973283 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/495c2732-0847-4e56-a609-1a24244a4969-config\") pod \"dnsmasq-dns-58db5546cc-k2nl7\" (UID: \"495c2732-0847-4e56-a609-1a24244a4969\") " pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.974496 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/495c2732-0847-4e56-a609-1a24244a4969-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-k2nl7\" (UID: \"495c2732-0847-4e56-a609-1a24244a4969\") " pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" Feb 01 14:38:30 crc kubenswrapper[4820]: I0201 14:38:30.974629 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/495c2732-0847-4e56-a609-1a24244a4969-dns-svc\") pod \"dnsmasq-dns-58db5546cc-k2nl7\" (UID: \"495c2732-0847-4e56-a609-1a24244a4969\") " pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.017474 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt6t5\" (UniqueName: \"kubernetes.io/projected/495c2732-0847-4e56-a609-1a24244a4969-kube-api-access-zt6t5\") pod \"dnsmasq-dns-58db5546cc-k2nl7\" (UID: \"495c2732-0847-4e56-a609-1a24244a4969\") " pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.074945 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08a88342-a726-4369-bd10-9c9b303be658-scripts\") pod \"cinder-api-0\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") " pod="openstack/cinder-api-0" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.074993 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a88342-a726-4369-bd10-9c9b303be658-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") " pod="openstack/cinder-api-0" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.075016 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs4tg\" (UniqueName: \"kubernetes.io/projected/08a88342-a726-4369-bd10-9c9b303be658-kube-api-access-gs4tg\") pod \"cinder-api-0\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") " pod="openstack/cinder-api-0" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.075047 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08a88342-a726-4369-bd10-9c9b303be658-logs\") pod \"cinder-api-0\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") " pod="openstack/cinder-api-0" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.075071 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08a88342-a726-4369-bd10-9c9b303be658-config-data\") pod \"cinder-api-0\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") " pod="openstack/cinder-api-0" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.075102 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08a88342-a726-4369-bd10-9c9b303be658-config-data-custom\") pod \"cinder-api-0\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") " pod="openstack/cinder-api-0" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.075165 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08a88342-a726-4369-bd10-9c9b303be658-etc-machine-id\") pod \"cinder-api-0\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") " pod="openstack/cinder-api-0" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.113324 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.188762 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08a88342-a726-4369-bd10-9c9b303be658-scripts\") pod \"cinder-api-0\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") " pod="openstack/cinder-api-0" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.189060 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a88342-a726-4369-bd10-9c9b303be658-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") " pod="openstack/cinder-api-0" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.189090 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs4tg\" (UniqueName: \"kubernetes.io/projected/08a88342-a726-4369-bd10-9c9b303be658-kube-api-access-gs4tg\") pod \"cinder-api-0\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") " pod="openstack/cinder-api-0" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.189150 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08a88342-a726-4369-bd10-9c9b303be658-logs\") pod \"cinder-api-0\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") " pod="openstack/cinder-api-0" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.189197 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08a88342-a726-4369-bd10-9c9b303be658-config-data\") pod \"cinder-api-0\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") " pod="openstack/cinder-api-0" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.189254 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08a88342-a726-4369-bd10-9c9b303be658-config-data-custom\") pod \"cinder-api-0\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") " pod="openstack/cinder-api-0" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.189424 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08a88342-a726-4369-bd10-9c9b303be658-etc-machine-id\") pod \"cinder-api-0\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") " pod="openstack/cinder-api-0" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.189455 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08a88342-a726-4369-bd10-9c9b303be658-etc-machine-id\") pod \"cinder-api-0\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") " pod="openstack/cinder-api-0" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.192853 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08a88342-a726-4369-bd10-9c9b303be658-logs\") pod \"cinder-api-0\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") " pod="openstack/cinder-api-0" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.209659 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08a88342-a726-4369-bd10-9c9b303be658-scripts\") pod \"cinder-api-0\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") " 
pod="openstack/cinder-api-0" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.212420 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08a88342-a726-4369-bd10-9c9b303be658-config-data-custom\") pod \"cinder-api-0\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") " pod="openstack/cinder-api-0" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.213833 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a88342-a726-4369-bd10-9c9b303be658-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") " pod="openstack/cinder-api-0" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.214286 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08a88342-a726-4369-bd10-9c9b303be658-config-data\") pod \"cinder-api-0\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") " pod="openstack/cinder-api-0" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.217857 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs4tg\" (UniqueName: \"kubernetes.io/projected/08a88342-a726-4369-bd10-9c9b303be658-kube-api-access-gs4tg\") pod \"cinder-api-0\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") " pod="openstack/cinder-api-0" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.259488 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-wl585" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.297375 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.297932 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8f43c3c-32c5-4636-bbc7-21abde903f86-dns-svc\") pod \"e8f43c3c-32c5-4636-bbc7-21abde903f86\" (UID: \"e8f43c3c-32c5-4636-bbc7-21abde903f86\") " Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.297997 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8f43c3c-32c5-4636-bbc7-21abde903f86-config\") pod \"e8f43c3c-32c5-4636-bbc7-21abde903f86\" (UID: \"e8f43c3c-32c5-4636-bbc7-21abde903f86\") " Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.298050 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8f43c3c-32c5-4636-bbc7-21abde903f86-ovsdbserver-sb\") pod \"e8f43c3c-32c5-4636-bbc7-21abde903f86\" (UID: \"e8f43c3c-32c5-4636-bbc7-21abde903f86\") " Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.298127 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvlq8\" (UniqueName: \"kubernetes.io/projected/e8f43c3c-32c5-4636-bbc7-21abde903f86-kube-api-access-bvlq8\") pod \"e8f43c3c-32c5-4636-bbc7-21abde903f86\" (UID: \"e8f43c3c-32c5-4636-bbc7-21abde903f86\") " Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.298178 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8f43c3c-32c5-4636-bbc7-21abde903f86-ovsdbserver-nb\") pod \"e8f43c3c-32c5-4636-bbc7-21abde903f86\" (UID: \"e8f43c3c-32c5-4636-bbc7-21abde903f86\") " 
Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.304941 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8f43c3c-32c5-4636-bbc7-21abde903f86-kube-api-access-bvlq8" (OuterVolumeSpecName: "kube-api-access-bvlq8") pod "e8f43c3c-32c5-4636-bbc7-21abde903f86" (UID: "e8f43c3c-32c5-4636-bbc7-21abde903f86"). InnerVolumeSpecName "kube-api-access-bvlq8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.344765 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-wl585"
Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.344611 4820 generic.go:334] "Generic (PLEG): container finished" podID="e8f43c3c-32c5-4636-bbc7-21abde903f86" containerID="b4df40e7ca6062046e31fc331cbf04f67f871ae77b5e3931cc600f70b1867a9f" exitCode=0
Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.344815 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-wl585" event={"ID":"e8f43c3c-32c5-4636-bbc7-21abde903f86","Type":"ContainerDied","Data":"b4df40e7ca6062046e31fc331cbf04f67f871ae77b5e3931cc600f70b1867a9f"}
Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.344848 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-wl585" event={"ID":"e8f43c3c-32c5-4636-bbc7-21abde903f86","Type":"ContainerDied","Data":"fac67dd949ff9f160d1270ea0476cfd47ccef46b61789de9324ea8e2c6963718"}
Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.344867 4820 scope.go:117] "RemoveContainer" containerID="b4df40e7ca6062046e31fc331cbf04f67f871ae77b5e3931cc600f70b1867a9f"
Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.354712 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8f43c3c-32c5-4636-bbc7-21abde903f86-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e8f43c3c-32c5-4636-bbc7-21abde903f86" (UID: "e8f43c3c-32c5-4636-bbc7-21abde903f86"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.380985 4820 scope.go:117] "RemoveContainer" containerID="da7f111b7d2f3e3149a23c3f9bf8ad2d90e4de18350430b8d9def1e1d9506c3b"
Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.392024 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8f43c3c-32c5-4636-bbc7-21abde903f86-config" (OuterVolumeSpecName: "config") pod "e8f43c3c-32c5-4636-bbc7-21abde903f86" (UID: "e8f43c3c-32c5-4636-bbc7-21abde903f86"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.393084 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8f43c3c-32c5-4636-bbc7-21abde903f86-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e8f43c3c-32c5-4636-bbc7-21abde903f86" (UID: "e8f43c3c-32c5-4636-bbc7-21abde903f86"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.396647 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8f43c3c-32c5-4636-bbc7-21abde903f86-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e8f43c3c-32c5-4636-bbc7-21abde903f86" (UID: "e8f43c3c-32c5-4636-bbc7-21abde903f86"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.399859 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8f43c3c-32c5-4636-bbc7-21abde903f86-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.399922 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8f43c3c-32c5-4636-bbc7-21abde903f86-config\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.399939 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8f43c3c-32c5-4636-bbc7-21abde903f86-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.399955 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvlq8\" (UniqueName: \"kubernetes.io/projected/e8f43c3c-32c5-4636-bbc7-21abde903f86-kube-api-access-bvlq8\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.399967 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8f43c3c-32c5-4636-bbc7-21abde903f86-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.407747 4820 scope.go:117] "RemoveContainer" containerID="b4df40e7ca6062046e31fc331cbf04f67f871ae77b5e3931cc600f70b1867a9f"
Feb 01 14:38:31 crc kubenswrapper[4820]: E0201 14:38:31.411026 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4df40e7ca6062046e31fc331cbf04f67f871ae77b5e3931cc600f70b1867a9f\": container with ID starting with b4df40e7ca6062046e31fc331cbf04f67f871ae77b5e3931cc600f70b1867a9f not found: ID does not exist" containerID="b4df40e7ca6062046e31fc331cbf04f67f871ae77b5e3931cc600f70b1867a9f"
Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.411072 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4df40e7ca6062046e31fc331cbf04f67f871ae77b5e3931cc600f70b1867a9f"} err="failed to get container status \"b4df40e7ca6062046e31fc331cbf04f67f871ae77b5e3931cc600f70b1867a9f\": rpc error: code = NotFound desc = could not find container \"b4df40e7ca6062046e31fc331cbf04f67f871ae77b5e3931cc600f70b1867a9f\": container with ID starting with b4df40e7ca6062046e31fc331cbf04f67f871ae77b5e3931cc600f70b1867a9f not found: ID does not exist"
Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.411100 4820 scope.go:117] "RemoveContainer" containerID="da7f111b7d2f3e3149a23c3f9bf8ad2d90e4de18350430b8d9def1e1d9506c3b"
Feb 01 14:38:31 crc kubenswrapper[4820]: E0201 14:38:31.411479 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da7f111b7d2f3e3149a23c3f9bf8ad2d90e4de18350430b8d9def1e1d9506c3b\": container with ID starting with da7f111b7d2f3e3149a23c3f9bf8ad2d90e4de18350430b8d9def1e1d9506c3b not found: ID does not exist" containerID="da7f111b7d2f3e3149a23c3f9bf8ad2d90e4de18350430b8d9def1e1d9506c3b"
Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.411535 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da7f111b7d2f3e3149a23c3f9bf8ad2d90e4de18350430b8d9def1e1d9506c3b"} err="failed to get container status \"da7f111b7d2f3e3149a23c3f9bf8ad2d90e4de18350430b8d9def1e1d9506c3b\": rpc error: code = NotFound desc = could not find container \"da7f111b7d2f3e3149a23c3f9bf8ad2d90e4de18350430b8d9def1e1d9506c3b\": container with ID starting with da7f111b7d2f3e3149a23c3f9bf8ad2d90e4de18350430b8d9def1e1d9506c3b not found: ID does not exist"
Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.518864 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 01 14:38:31 crc kubenswrapper[4820]: W0201 14:38:31.520251 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0b99ee6_faeb_4d47_a7ff_7bc8240ef69a.slice/crio-ef6a29da8000f09c3316c1c7c530d3f650cbdb27d96c53520a8ecb257691fd93 WatchSource:0}: Error finding container ef6a29da8000f09c3316c1c7c530d3f650cbdb27d96c53520a8ecb257691fd93: Status 404 returned error can't find the container with id ef6a29da8000f09c3316c1c7c530d3f650cbdb27d96c53520a8ecb257691fd93
Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.644138 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-k2nl7"]
Feb 01 14:38:31 crc kubenswrapper[4820]: W0201 14:38:31.660046 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod495c2732_0847_4e56_a609_1a24244a4969.slice/crio-f158d262e4cea0bdca725da6e98276bde2f477fcb9a80faa95e073d7c1259a02 WatchSource:0}: Error finding container f158d262e4cea0bdca725da6e98276bde2f477fcb9a80faa95e073d7c1259a02: Status 404 returned error can't find the container with id f158d262e4cea0bdca725da6e98276bde2f477fcb9a80faa95e073d7c1259a02
Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.691382 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-wl585"]
Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.703303 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-wl585"]
Feb 01 14:38:31 crc kubenswrapper[4820]: I0201 14:38:31.816354 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 01 14:38:31 crc kubenswrapper[4820]: W0201 14:38:31.820267 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08a88342_a726_4369_bd10_9c9b303be658.slice/crio-a72519f390ee2f277d6a5d9a387d1e1a7fa75912c810542ee9d4e4679794acf9 WatchSource:0}: Error finding container a72519f390ee2f277d6a5d9a387d1e1a7fa75912c810542ee9d4e4679794acf9: Status 404 returned error can't find the container with id a72519f390ee2f277d6a5d9a387d1e1a7fa75912c810542ee9d4e4679794acf9
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.078753 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-746455c778-hw955" podUID="d446e958-fb74-4146-9dbd-ec4720536b9e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.145:9311/healthcheck\": read tcp 10.217.0.2:43210->10.217.0.145:9311: read: connection reset by peer"
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.079379 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-746455c778-hw955" podUID="d446e958-fb74-4146-9dbd-ec4720536b9e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.145:9311/healthcheck\": read tcp 10.217.0.2:43214->10.217.0.145:9311: read: connection reset by peer"
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.394999 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8255e6c-59ea-4449-bcec-264a12bf6d6e","Type":"ContainerStarted","Data":"c6842f5e67625d5251d25809ecf1b20983c7facbe0923c3d98f70cb072f7a747"}
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.396022 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.420557 4820 generic.go:334] "Generic (PLEG): container finished" podID="495c2732-0847-4e56-a609-1a24244a4969" containerID="3140d85919217657478a7a1bb0b5275959e9b9169e07ab1d943a0747b8af12cf" exitCode=0
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.420644 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" event={"ID":"495c2732-0847-4e56-a609-1a24244a4969","Type":"ContainerDied","Data":"3140d85919217657478a7a1bb0b5275959e9b9169e07ab1d943a0747b8af12cf"}
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.420668 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" event={"ID":"495c2732-0847-4e56-a609-1a24244a4969","Type":"ContainerStarted","Data":"f158d262e4cea0bdca725da6e98276bde2f477fcb9a80faa95e073d7c1259a02"}
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.429736 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.174877455 podStartE2EDuration="6.429720437s" podCreationTimestamp="2026-02-01 14:38:26 +0000 UTC" firstStartedPulling="2026-02-01 14:38:27.338570858 +0000 UTC m=+1048.858937142" lastFinishedPulling="2026-02-01 14:38:31.59341384 +0000 UTC m=+1053.113780124" observedRunningTime="2026-02-01 14:38:32.418087674 +0000 UTC m=+1053.938453958" watchObservedRunningTime="2026-02-01 14:38:32.429720437 +0000 UTC m=+1053.950086721"
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.437003 4820 generic.go:334] "Generic (PLEG): container finished" podID="d446e958-fb74-4146-9dbd-ec4720536b9e" containerID="0ebce16cff1333d3d6dca89e21978509424e41e1fb668ab1353676d936244ba1" exitCode=0
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.437084 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-746455c778-hw955" event={"ID":"d446e958-fb74-4146-9dbd-ec4720536b9e","Type":"ContainerDied","Data":"0ebce16cff1333d3d6dca89e21978509424e41e1fb668ab1353676d936244ba1"}
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.447684 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"08a88342-a726-4369-bd10-9c9b303be658","Type":"ContainerStarted","Data":"a72519f390ee2f277d6a5d9a387d1e1a7fa75912c810542ee9d4e4679794acf9"}
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.526749 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a","Type":"ContainerStarted","Data":"ef6a29da8000f09c3316c1c7c530d3f650cbdb27d96c53520a8ecb257691fd93"}
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.722215 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-746455c778-hw955"
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.748618 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d446e958-fb74-4146-9dbd-ec4720536b9e-logs\") pod \"d446e958-fb74-4146-9dbd-ec4720536b9e\" (UID: \"d446e958-fb74-4146-9dbd-ec4720536b9e\") "
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.749052 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d446e958-fb74-4146-9dbd-ec4720536b9e-combined-ca-bundle\") pod \"d446e958-fb74-4146-9dbd-ec4720536b9e\" (UID: \"d446e958-fb74-4146-9dbd-ec4720536b9e\") "
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.749081 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d446e958-fb74-4146-9dbd-ec4720536b9e-config-data-custom\") pod \"d446e958-fb74-4146-9dbd-ec4720536b9e\" (UID: \"d446e958-fb74-4146-9dbd-ec4720536b9e\") "
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.749142 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kvgg\" (UniqueName: \"kubernetes.io/projected/d446e958-fb74-4146-9dbd-ec4720536b9e-kube-api-access-6kvgg\") pod \"d446e958-fb74-4146-9dbd-ec4720536b9e\" (UID: \"d446e958-fb74-4146-9dbd-ec4720536b9e\") "
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.749223 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d446e958-fb74-4146-9dbd-ec4720536b9e-config-data\") pod \"d446e958-fb74-4146-9dbd-ec4720536b9e\" (UID: \"d446e958-fb74-4146-9dbd-ec4720536b9e\") "
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.753495 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d446e958-fb74-4146-9dbd-ec4720536b9e-logs" (OuterVolumeSpecName: "logs") pod "d446e958-fb74-4146-9dbd-ec4720536b9e" (UID: "d446e958-fb74-4146-9dbd-ec4720536b9e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.759370 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d446e958-fb74-4146-9dbd-ec4720536b9e-kube-api-access-6kvgg" (OuterVolumeSpecName: "kube-api-access-6kvgg") pod "d446e958-fb74-4146-9dbd-ec4720536b9e" (UID: "d446e958-fb74-4146-9dbd-ec4720536b9e"). InnerVolumeSpecName "kube-api-access-6kvgg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.762075 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d446e958-fb74-4146-9dbd-ec4720536b9e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d446e958-fb74-4146-9dbd-ec4720536b9e" (UID: "d446e958-fb74-4146-9dbd-ec4720536b9e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.802996 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d446e958-fb74-4146-9dbd-ec4720536b9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d446e958-fb74-4146-9dbd-ec4720536b9e" (UID: "d446e958-fb74-4146-9dbd-ec4720536b9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.846968 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d446e958-fb74-4146-9dbd-ec4720536b9e-config-data" (OuterVolumeSpecName: "config-data") pod "d446e958-fb74-4146-9dbd-ec4720536b9e" (UID: "d446e958-fb74-4146-9dbd-ec4720536b9e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.851653 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d446e958-fb74-4146-9dbd-ec4720536b9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.851682 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d446e958-fb74-4146-9dbd-ec4720536b9e-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.851696 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kvgg\" (UniqueName: \"kubernetes.io/projected/d446e958-fb74-4146-9dbd-ec4720536b9e-kube-api-access-6kvgg\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.851707 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d446e958-fb74-4146-9dbd-ec4720536b9e-config-data\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.851717 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d446e958-fb74-4146-9dbd-ec4720536b9e-logs\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:32 crc kubenswrapper[4820]: I0201 14:38:32.989294 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 01 14:38:33 crc kubenswrapper[4820]: I0201 14:38:33.225722 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8f43c3c-32c5-4636-bbc7-21abde903f86" path="/var/lib/kubelet/pods/e8f43c3c-32c5-4636-bbc7-21abde903f86/volumes"
Feb 01 14:38:33 crc kubenswrapper[4820]: I0201 14:38:33.538783 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" event={"ID":"495c2732-0847-4e56-a609-1a24244a4969","Type":"ContainerStarted","Data":"cab89cf77f7942aab7c3d7b2fcc3067eb40dfd940b9c01e9d394c5d223cce512"}
Feb 01 14:38:33 crc kubenswrapper[4820]: I0201 14:38:33.539952 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58db5546cc-k2nl7"
Feb 01 14:38:33 crc kubenswrapper[4820]: I0201 14:38:33.548555 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-746455c778-hw955" event={"ID":"d446e958-fb74-4146-9dbd-ec4720536b9e","Type":"ContainerDied","Data":"8019b212fd5740b9454b6057cb1033c8f35b7a164043091292f9c011111f788e"}
Feb 01 14:38:33 crc kubenswrapper[4820]: I0201 14:38:33.548611 4820 scope.go:117] "RemoveContainer" containerID="0ebce16cff1333d3d6dca89e21978509424e41e1fb668ab1353676d936244ba1"
Feb 01 14:38:33 crc kubenswrapper[4820]: I0201 14:38:33.548642 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-746455c778-hw955"
Feb 01 14:38:33 crc kubenswrapper[4820]: I0201 14:38:33.550413 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"08a88342-a726-4369-bd10-9c9b303be658","Type":"ContainerStarted","Data":"0fa37345bd199b20c7abf7d3459c4002e06eabbe0c4b9c91b85eb28c5b9113c6"}
Feb 01 14:38:33 crc kubenswrapper[4820]: I0201 14:38:33.553317 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a","Type":"ContainerStarted","Data":"0879f5156021494bc18c072b949ebfc42619e96cc8c509acb7a4735b861f220c"}
Feb 01 14:38:33 crc kubenswrapper[4820]: I0201 14:38:33.561334 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" podStartSLOduration=3.561317016 podStartE2EDuration="3.561317016s" podCreationTimestamp="2026-02-01 14:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:38:33.556348855 +0000 UTC m=+1055.076715139" watchObservedRunningTime="2026-02-01 14:38:33.561317016 +0000 UTC m=+1055.081683300"
Feb 01 14:38:33 crc kubenswrapper[4820]: I0201 14:38:33.579946 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-746455c778-hw955"]
Feb 01 14:38:33 crc kubenswrapper[4820]: I0201 14:38:33.582303 4820 scope.go:117] "RemoveContainer" containerID="c44f12df7794ddd18b9bb72de290241b6725a1dfe070dfcbd9075486023955af"
Feb 01 14:38:33 crc kubenswrapper[4820]: I0201 14:38:33.585644 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-746455c778-hw955"]
Feb 01 14:38:34 crc kubenswrapper[4820]: I0201 14:38:34.561596 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"08a88342-a726-4369-bd10-9c9b303be658","Type":"ContainerStarted","Data":"778a52ce1f2f242fc2188bc73bd955ea43b228df62d61700b9cf100cd20a46f4"}
Feb 01 14:38:34 crc kubenswrapper[4820]: I0201 14:38:34.561946 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 01 14:38:34 crc kubenswrapper[4820]: I0201 14:38:34.561731 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="08a88342-a726-4369-bd10-9c9b303be658" containerName="cinder-api" containerID="cri-o://778a52ce1f2f242fc2188bc73bd955ea43b228df62d61700b9cf100cd20a46f4" gracePeriod=30
Feb 01 14:38:34 crc kubenswrapper[4820]: I0201 14:38:34.561662 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="08a88342-a726-4369-bd10-9c9b303be658" containerName="cinder-api-log" containerID="cri-o://0fa37345bd199b20c7abf7d3459c4002e06eabbe0c4b9c91b85eb28c5b9113c6" gracePeriod=30
Feb 01 14:38:34 crc kubenswrapper[4820]: I0201 14:38:34.566818 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a","Type":"ContainerStarted","Data":"6d3d8f636ab4d2c21449670689af0736ec5b57ab70d0941bdf3c0122270bb1ed"}
Feb 01 14:38:34 crc kubenswrapper[4820]: I0201 14:38:34.581527 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.58148166 podStartE2EDuration="4.58148166s" podCreationTimestamp="2026-02-01 14:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:38:34.580211549 +0000 UTC m=+1056.100577833" watchObservedRunningTime="2026-02-01 14:38:34.58148166 +0000 UTC m=+1056.101847944"
Feb 01 14:38:34 crc kubenswrapper[4820]: I0201 14:38:34.613467 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.842451102 podStartE2EDuration="4.613449359s" podCreationTimestamp="2026-02-01 14:38:30 +0000 UTC" firstStartedPulling="2026-02-01 14:38:31.522450671 +0000 UTC m=+1053.042816955" lastFinishedPulling="2026-02-01 14:38:32.293448928 +0000 UTC m=+1053.813815212" observedRunningTime="2026-02-01 14:38:34.60484467 +0000 UTC m=+1056.125210954" watchObservedRunningTime="2026-02-01 14:38:34.613449359 +0000 UTC m=+1056.133815643"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.161181 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.205254 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08a88342-a726-4369-bd10-9c9b303be658-config-data\") pod \"08a88342-a726-4369-bd10-9c9b303be658\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") "
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.205308 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08a88342-a726-4369-bd10-9c9b303be658-scripts\") pod \"08a88342-a726-4369-bd10-9c9b303be658\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") "
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.205339 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08a88342-a726-4369-bd10-9c9b303be658-config-data-custom\") pod \"08a88342-a726-4369-bd10-9c9b303be658\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") "
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.205372 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a88342-a726-4369-bd10-9c9b303be658-combined-ca-bundle\") pod \"08a88342-a726-4369-bd10-9c9b303be658\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") "
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.205402 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08a88342-a726-4369-bd10-9c9b303be658-etc-machine-id\") pod \"08a88342-a726-4369-bd10-9c9b303be658\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") "
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.205426 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs4tg\" (UniqueName: \"kubernetes.io/projected/08a88342-a726-4369-bd10-9c9b303be658-kube-api-access-gs4tg\") pod \"08a88342-a726-4369-bd10-9c9b303be658\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") "
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.205514 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08a88342-a726-4369-bd10-9c9b303be658-logs\") pod \"08a88342-a726-4369-bd10-9c9b303be658\" (UID: \"08a88342-a726-4369-bd10-9c9b303be658\") "
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.206016 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08a88342-a726-4369-bd10-9c9b303be658-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "08a88342-a726-4369-bd10-9c9b303be658" (UID: "08a88342-a726-4369-bd10-9c9b303be658"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.206253 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08a88342-a726-4369-bd10-9c9b303be658-logs" (OuterVolumeSpecName: "logs") pod "08a88342-a726-4369-bd10-9c9b303be658" (UID: "08a88342-a726-4369-bd10-9c9b303be658"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.211055 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08a88342-a726-4369-bd10-9c9b303be658-scripts" (OuterVolumeSpecName: "scripts") pod "08a88342-a726-4369-bd10-9c9b303be658" (UID: "08a88342-a726-4369-bd10-9c9b303be658"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.213082 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d446e958-fb74-4146-9dbd-ec4720536b9e" path="/var/lib/kubelet/pods/d446e958-fb74-4146-9dbd-ec4720536b9e/volumes"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.219018 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08a88342-a726-4369-bd10-9c9b303be658-kube-api-access-gs4tg" (OuterVolumeSpecName: "kube-api-access-gs4tg") pod "08a88342-a726-4369-bd10-9c9b303be658" (UID: "08a88342-a726-4369-bd10-9c9b303be658"). InnerVolumeSpecName "kube-api-access-gs4tg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.231059 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08a88342-a726-4369-bd10-9c9b303be658-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "08a88342-a726-4369-bd10-9c9b303be658" (UID: "08a88342-a726-4369-bd10-9c9b303be658"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.238999 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08a88342-a726-4369-bd10-9c9b303be658-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08a88342-a726-4369-bd10-9c9b303be658" (UID: "08a88342-a726-4369-bd10-9c9b303be658"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.274632 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08a88342-a726-4369-bd10-9c9b303be658-config-data" (OuterVolumeSpecName: "config-data") pod "08a88342-a726-4369-bd10-9c9b303be658" (UID: "08a88342-a726-4369-bd10-9c9b303be658"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.307564 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a88342-a726-4369-bd10-9c9b303be658-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.307603 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08a88342-a726-4369-bd10-9c9b303be658-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.307617 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gs4tg\" (UniqueName: \"kubernetes.io/projected/08a88342-a726-4369-bd10-9c9b303be658-kube-api-access-gs4tg\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.307630 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08a88342-a726-4369-bd10-9c9b303be658-logs\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.307641 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08a88342-a726-4369-bd10-9c9b303be658-config-data\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.307654 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08a88342-a726-4369-bd10-9c9b303be658-scripts\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.307665 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08a88342-a726-4369-bd10-9c9b303be658-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.587346 4820 generic.go:334] "Generic (PLEG): container finished" podID="08a88342-a726-4369-bd10-9c9b303be658" containerID="778a52ce1f2f242fc2188bc73bd955ea43b228df62d61700b9cf100cd20a46f4" exitCode=0
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.587400 4820 generic.go:334] "Generic (PLEG): container finished" podID="08a88342-a726-4369-bd10-9c9b303be658" containerID="0fa37345bd199b20c7abf7d3459c4002e06eabbe0c4b9c91b85eb28c5b9113c6" exitCode=143
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.588680 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.588856 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"08a88342-a726-4369-bd10-9c9b303be658","Type":"ContainerDied","Data":"778a52ce1f2f242fc2188bc73bd955ea43b228df62d61700b9cf100cd20a46f4"}
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.588916 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"08a88342-a726-4369-bd10-9c9b303be658","Type":"ContainerDied","Data":"0fa37345bd199b20c7abf7d3459c4002e06eabbe0c4b9c91b85eb28c5b9113c6"}
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.588932 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"08a88342-a726-4369-bd10-9c9b303be658","Type":"ContainerDied","Data":"a72519f390ee2f277d6a5d9a387d1e1a7fa75912c810542ee9d4e4679794acf9"}
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.589004 4820 scope.go:117] "RemoveContainer" containerID="778a52ce1f2f242fc2188bc73bd955ea43b228df62d61700b9cf100cd20a46f4"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.630327 4820 scope.go:117] "RemoveContainer" containerID="0fa37345bd199b20c7abf7d3459c4002e06eabbe0c4b9c91b85eb28c5b9113c6"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.630496 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.634809 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.678859 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Feb 01 14:38:35 crc kubenswrapper[4820]: E0201 14:38:35.679212 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d446e958-fb74-4146-9dbd-ec4720536b9e" containerName="barbican-api"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.679225 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d446e958-fb74-4146-9dbd-ec4720536b9e" containerName="barbican-api"
Feb 01 14:38:35 crc kubenswrapper[4820]: E0201 14:38:35.679238 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8f43c3c-32c5-4636-bbc7-21abde903f86" containerName="dnsmasq-dns"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.679244 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f43c3c-32c5-4636-bbc7-21abde903f86" containerName="dnsmasq-dns"
Feb 01 14:38:35 crc kubenswrapper[4820]: E0201 14:38:35.679262 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a88342-a726-4369-bd10-9c9b303be658" containerName="cinder-api"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.679269 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="08a88342-a726-4369-bd10-9c9b303be658" containerName="cinder-api"
Feb 01 14:38:35 crc kubenswrapper[4820]: E0201 14:38:35.679284 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d446e958-fb74-4146-9dbd-ec4720536b9e" containerName="barbican-api-log"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.679290 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d446e958-fb74-4146-9dbd-ec4720536b9e" containerName="barbican-api-log"
Feb 01 14:38:35 crc kubenswrapper[4820]: E0201 14:38:35.679304 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8f43c3c-32c5-4636-bbc7-21abde903f86" containerName="init"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.679311 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f43c3c-32c5-4636-bbc7-21abde903f86" containerName="init"
Feb 01 14:38:35 crc kubenswrapper[4820]: E0201 14:38:35.679326 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a88342-a726-4369-bd10-9c9b303be658" containerName="cinder-api-log"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.679334 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="08a88342-a726-4369-bd10-9c9b303be658" containerName="cinder-api-log"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.679524 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="08a88342-a726-4369-bd10-9c9b303be658" containerName="cinder-api-log"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.679547 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d446e958-fb74-4146-9dbd-ec4720536b9e" containerName="barbican-api-log"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.679557 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="08a88342-a726-4369-bd10-9c9b303be658" containerName="cinder-api"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.679565 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8f43c3c-32c5-4636-bbc7-21abde903f86" containerName="dnsmasq-dns"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.679573 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d446e958-fb74-4146-9dbd-ec4720536b9e" containerName="barbican-api"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.680414 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.680495 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.688510 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.688786 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.692464 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.714103 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71d67552-8350-45f8-8657-61363724da90-scripts\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.714167 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71d67552-8350-45f8-8657-61363724da90-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.714261 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71d67552-8350-45f8-8657-61363724da90-config-data-custom\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.714301 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/71d67552-8350-45f8-8657-61363724da90-etc-machine-id\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.714348 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71d67552-8350-45f8-8657-61363724da90-config-data\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.714408 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnrcd\" (UniqueName: \"kubernetes.io/projected/71d67552-8350-45f8-8657-61363724da90-kube-api-access-jnrcd\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.714433 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71d67552-8350-45f8-8657-61363724da90-public-tls-certs\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.714455 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71d67552-8350-45f8-8657-61363724da90-logs\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.714481 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71d67552-8350-45f8-8657-61363724da90-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.725293 4820 scope.go:117] "RemoveContainer" containerID="778a52ce1f2f242fc2188bc73bd955ea43b228df62d61700b9cf100cd20a46f4"
Feb 01 14:38:35 crc kubenswrapper[4820]: E0201 14:38:35.726751 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"778a52ce1f2f242fc2188bc73bd955ea43b228df62d61700b9cf100cd20a46f4\": container with ID starting with 778a52ce1f2f242fc2188bc73bd955ea43b228df62d61700b9cf100cd20a46f4 not found: ID does not exist" containerID="778a52ce1f2f242fc2188bc73bd955ea43b228df62d61700b9cf100cd20a46f4"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.726780 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"778a52ce1f2f242fc2188bc73bd955ea43b228df62d61700b9cf100cd20a46f4"} err="failed to get container status \"778a52ce1f2f242fc2188bc73bd955ea43b228df62d61700b9cf100cd20a46f4\": rpc error: code = NotFound desc = could not find container \"778a52ce1f2f242fc2188bc73bd955ea43b228df62d61700b9cf100cd20a46f4\": container with ID starting with 778a52ce1f2f242fc2188bc73bd955ea43b228df62d61700b9cf100cd20a46f4 not found: ID does not exist"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.726801 4820 scope.go:117] "RemoveContainer" containerID="0fa37345bd199b20c7abf7d3459c4002e06eabbe0c4b9c91b85eb28c5b9113c6"
Feb 01 14:38:35 crc kubenswrapper[4820]: E0201 14:38:35.727074 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fa37345bd199b20c7abf7d3459c4002e06eabbe0c4b9c91b85eb28c5b9113c6\": container with ID starting with 0fa37345bd199b20c7abf7d3459c4002e06eabbe0c4b9c91b85eb28c5b9113c6 not found: ID does not exist" containerID="0fa37345bd199b20c7abf7d3459c4002e06eabbe0c4b9c91b85eb28c5b9113c6"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.727112 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fa37345bd199b20c7abf7d3459c4002e06eabbe0c4b9c91b85eb28c5b9113c6"} err="failed to get container status \"0fa37345bd199b20c7abf7d3459c4002e06eabbe0c4b9c91b85eb28c5b9113c6\": rpc error: code = NotFound desc = could not find container \"0fa37345bd199b20c7abf7d3459c4002e06eabbe0c4b9c91b85eb28c5b9113c6\": container with ID starting with 0fa37345bd199b20c7abf7d3459c4002e06eabbe0c4b9c91b85eb28c5b9113c6 not found: ID does not exist"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.727141 4820 scope.go:117] "RemoveContainer" containerID="778a52ce1f2f242fc2188bc73bd955ea43b228df62d61700b9cf100cd20a46f4"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.727383 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"778a52ce1f2f242fc2188bc73bd955ea43b228df62d61700b9cf100cd20a46f4"} err="failed to get container status \"778a52ce1f2f242fc2188bc73bd955ea43b228df62d61700b9cf100cd20a46f4\": rpc error: code = NotFound desc = could not find container \"778a52ce1f2f242fc2188bc73bd955ea43b228df62d61700b9cf100cd20a46f4\": container with ID starting with 778a52ce1f2f242fc2188bc73bd955ea43b228df62d61700b9cf100cd20a46f4 not found: ID does not exist"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.727399 4820 scope.go:117] "RemoveContainer" containerID="0fa37345bd199b20c7abf7d3459c4002e06eabbe0c4b9c91b85eb28c5b9113c6"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.727609 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fa37345bd199b20c7abf7d3459c4002e06eabbe0c4b9c91b85eb28c5b9113c6"} err="failed to get container status \"0fa37345bd199b20c7abf7d3459c4002e06eabbe0c4b9c91b85eb28c5b9113c6\": rpc error: code = NotFound desc = could not find container \"0fa37345bd199b20c7abf7d3459c4002e06eabbe0c4b9c91b85eb28c5b9113c6\": container with ID starting with 0fa37345bd199b20c7abf7d3459c4002e06eabbe0c4b9c91b85eb28c5b9113c6 not found: ID does not exist"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.815909 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71d67552-8350-45f8-8657-61363724da90-config-data-custom\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.815971 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/71d67552-8350-45f8-8657-61363724da90-etc-machine-id\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.816043 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71d67552-8350-45f8-8657-61363724da90-config-data\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.816094 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnrcd\" (UniqueName: \"kubernetes.io/projected/71d67552-8350-45f8-8657-61363724da90-kube-api-access-jnrcd\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.816121 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71d67552-8350-45f8-8657-61363724da90-public-tls-certs\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.816140 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71d67552-8350-45f8-8657-61363724da90-logs\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.816162 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71d67552-8350-45f8-8657-61363724da90-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.816192 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71d67552-8350-45f8-8657-61363724da90-scripts\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.816209 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71d67552-8350-45f8-8657-61363724da90-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.817094 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/71d67552-8350-45f8-8657-61363724da90-etc-machine-id\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.817596 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71d67552-8350-45f8-8657-61363724da90-logs\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.821465 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71d67552-8350-45f8-8657-61363724da90-public-tls-certs\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.821495 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71d67552-8350-45f8-8657-61363724da90-scripts\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.821690 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71d67552-8350-45f8-8657-61363724da90-config-data\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.822054 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71d67552-8350-45f8-8657-61363724da90-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.822782 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71d67552-8350-45f8-8657-61363724da90-config-data-custom\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.833422 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71d67552-8350-45f8-8657-61363724da90-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.844586 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnrcd\" (UniqueName: \"kubernetes.io/projected/71d67552-8350-45f8-8657-61363724da90-kube-api-access-jnrcd\") pod \"cinder-api-0\" (UID: \"71d67552-8350-45f8-8657-61363724da90\") " pod="openstack/cinder-api-0"
Feb 01 14:38:35 crc kubenswrapper[4820]: I0201 14:38:35.950683 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 01 14:38:36 crc kubenswrapper[4820]: I0201 14:38:36.004414 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 01 14:38:36 crc kubenswrapper[4820]: I0201 14:38:36.444954 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 01 14:38:36 crc kubenswrapper[4820]: I0201 14:38:36.606309 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"71d67552-8350-45f8-8657-61363724da90","Type":"ContainerStarted","Data":"c3fc424c07d903112ea2075fbeb705be43179deb2a16b0d9e5d1d2e4a9cb4d09"}
Feb 01 14:38:37 crc kubenswrapper[4820]: I0201 14:38:37.209338 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08a88342-a726-4369-bd10-9c9b303be658" path="/var/lib/kubelet/pods/08a88342-a726-4369-bd10-9c9b303be658/volumes"
Feb 01 14:38:37 crc kubenswrapper[4820]: I0201 14:38:37.634850 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"71d67552-8350-45f8-8657-61363724da90","Type":"ContainerStarted","Data":"9aecd65fcf9a4cbdaad379abb9ba236c518812a9d1066276d989a6356f4aade0"}
Feb 01 14:38:37 crc kubenswrapper[4820]: I0201 14:38:37.634910 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"71d67552-8350-45f8-8657-61363724da90","Type":"ContainerStarted","Data":"89efe6035f1f46674fffcbd4d6bad90722d27f596bed189dc4498bf671d5f4de"}
Feb 01 14:38:37 crc kubenswrapper[4820]: I0201 14:38:37.635154 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 01 14:38:37 crc kubenswrapper[4820]: I0201 14:38:37.665136 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.6651118990000002 podStartE2EDuration="2.665111899s" podCreationTimestamp="2026-02-01 14:38:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:38:37.656420256 +0000 UTC m=+1059.176786560" watchObservedRunningTime="2026-02-01 14:38:37.665111899 +0000 UTC m=+1059.185478183"
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.115172 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58db5546cc-k2nl7"
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.186676 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl"]
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.187098 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" podUID="dda189ec-fa75-4457-ad6b-1b74df127b0c" containerName="dnsmasq-dns" containerID="cri-o://eb41abd1ad58d87c9f4b0faf91fd472dc995b0d23fb18cfb12e25e74d82b128d" gracePeriod=10
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.198884 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.256510 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.676663 4820 generic.go:334] "Generic (PLEG): container finished" podID="dda189ec-fa75-4457-ad6b-1b74df127b0c" containerID="eb41abd1ad58d87c9f4b0faf91fd472dc995b0d23fb18cfb12e25e74d82b128d" exitCode=0
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.676790 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" event={"ID":"dda189ec-fa75-4457-ad6b-1b74df127b0c","Type":"ContainerDied","Data":"eb41abd1ad58d87c9f4b0faf91fd472dc995b0d23fb18cfb12e25e74d82b128d"}
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.677092 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" event={"ID":"dda189ec-fa75-4457-ad6b-1b74df127b0c","Type":"ContainerDied","Data":"4616ea588a3e30a099dbec2af33ee0a3443a65b66af79332d0a0fd8ec9d51dd5"}
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.677117 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4616ea588a3e30a099dbec2af33ee0a3443a65b66af79332d0a0fd8ec9d51dd5"
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.677200 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a" containerName="cinder-scheduler" containerID="cri-o://0879f5156021494bc18c072b949ebfc42619e96cc8c509acb7a4735b861f220c" gracePeriod=30
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.677277 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a" containerName="probe" containerID="cri-o://6d3d8f636ab4d2c21449670689af0736ec5b57ab70d0941bdf3c0122270bb1ed" gracePeriod=30
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.711335 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl"
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.736443 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnmxk\" (UniqueName: \"kubernetes.io/projected/dda189ec-fa75-4457-ad6b-1b74df127b0c-kube-api-access-mnmxk\") pod \"dda189ec-fa75-4457-ad6b-1b74df127b0c\" (UID: \"dda189ec-fa75-4457-ad6b-1b74df127b0c\") "
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.736577 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dda189ec-fa75-4457-ad6b-1b74df127b0c-config\") pod \"dda189ec-fa75-4457-ad6b-1b74df127b0c\" (UID: \"dda189ec-fa75-4457-ad6b-1b74df127b0c\") "
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.736615 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dda189ec-fa75-4457-ad6b-1b74df127b0c-ovsdbserver-sb\") pod \"dda189ec-fa75-4457-ad6b-1b74df127b0c\" (UID: \"dda189ec-fa75-4457-ad6b-1b74df127b0c\") "
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.736637 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dda189ec-fa75-4457-ad6b-1b74df127b0c-ovsdbserver-nb\") pod \"dda189ec-fa75-4457-ad6b-1b74df127b0c\" (UID: \"dda189ec-fa75-4457-ad6b-1b74df127b0c\") "
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.736702 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dda189ec-fa75-4457-ad6b-1b74df127b0c-dns-svc\") pod \"dda189ec-fa75-4457-ad6b-1b74df127b0c\" (UID: \"dda189ec-fa75-4457-ad6b-1b74df127b0c\") "
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.745156 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dda189ec-fa75-4457-ad6b-1b74df127b0c-kube-api-access-mnmxk" (OuterVolumeSpecName: "kube-api-access-mnmxk") pod "dda189ec-fa75-4457-ad6b-1b74df127b0c" (UID: "dda189ec-fa75-4457-ad6b-1b74df127b0c"). InnerVolumeSpecName "kube-api-access-mnmxk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.789650 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dda189ec-fa75-4457-ad6b-1b74df127b0c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dda189ec-fa75-4457-ad6b-1b74df127b0c" (UID: "dda189ec-fa75-4457-ad6b-1b74df127b0c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.800166 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dda189ec-fa75-4457-ad6b-1b74df127b0c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dda189ec-fa75-4457-ad6b-1b74df127b0c" (UID: "dda189ec-fa75-4457-ad6b-1b74df127b0c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.802986 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dda189ec-fa75-4457-ad6b-1b74df127b0c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dda189ec-fa75-4457-ad6b-1b74df127b0c" (UID: "dda189ec-fa75-4457-ad6b-1b74df127b0c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.810626 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dda189ec-fa75-4457-ad6b-1b74df127b0c-config" (OuterVolumeSpecName: "config") pod "dda189ec-fa75-4457-ad6b-1b74df127b0c" (UID: "dda189ec-fa75-4457-ad6b-1b74df127b0c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.839955 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dda189ec-fa75-4457-ad6b-1b74df127b0c-config\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.839989 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dda189ec-fa75-4457-ad6b-1b74df127b0c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.839999 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dda189ec-fa75-4457-ad6b-1b74df127b0c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.840007 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dda189ec-fa75-4457-ad6b-1b74df127b0c-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:41 crc kubenswrapper[4820]: I0201 14:38:41.840016 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnmxk\" (UniqueName: \"kubernetes.io/projected/dda189ec-fa75-4457-ad6b-1b74df127b0c-kube-api-access-mnmxk\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:42 crc kubenswrapper[4820]: I0201 14:38:42.687539 4820 generic.go:334] "Generic (PLEG): container finished" podID="e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a" containerID="6d3d8f636ab4d2c21449670689af0736ec5b57ab70d0941bdf3c0122270bb1ed" exitCode=0
Feb 01 14:38:42 crc kubenswrapper[4820]: I0201 14:38:42.687624 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a","Type":"ContainerDied","Data":"6d3d8f636ab4d2c21449670689af0736ec5b57ab70d0941bdf3c0122270bb1ed"}
Feb 01 14:38:42 crc kubenswrapper[4820]: I0201 14:38:42.688019 4820 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl" Feb 01 14:38:42 crc kubenswrapper[4820]: I0201 14:38:42.719415 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl"] Feb 01 14:38:42 crc kubenswrapper[4820]: I0201 14:38:42.731101 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-jn6xl"] Feb 01 14:38:43 crc kubenswrapper[4820]: I0201 14:38:43.221538 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dda189ec-fa75-4457-ad6b-1b74df127b0c" path="/var/lib/kubelet/pods/dda189ec-fa75-4457-ad6b-1b74df127b0c/volumes" Feb 01 14:38:43 crc kubenswrapper[4820]: I0201 14:38:43.416700 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:43 crc kubenswrapper[4820]: I0201 14:38:43.418321 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7746bdf84d-qdnfk" Feb 01 14:38:43 crc kubenswrapper[4820]: I0201 14:38:43.982111 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7bb96c9945-brtg8" Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.663063 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.686696 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-config-data\") pod \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\" (UID: \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\") " Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.687857 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-scripts\") pod \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\" (UID: \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\") " Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.688042 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-etc-machine-id\") pod \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\" (UID: \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\") " Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.688139 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-combined-ca-bundle\") pod \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\" (UID: \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\") " Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.688321 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bsl9\" (UniqueName: \"kubernetes.io/projected/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-kube-api-access-7bsl9\") pod \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\" (UID: \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\") " Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.688398 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-config-data-custom\") pod \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\" (UID: \"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a\") " Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.689469 
4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a" (UID: "e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.696500 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a" (UID: "e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.696682 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-scripts" (OuterVolumeSpecName: "scripts") pod "e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a" (UID: "e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.698034 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-kube-api-access-7bsl9" (OuterVolumeSpecName: "kube-api-access-7bsl9") pod "e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a" (UID: "e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a"). InnerVolumeSpecName "kube-api-access-7bsl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.714110 4820 generic.go:334] "Generic (PLEG): container finished" podID="e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a" containerID="0879f5156021494bc18c072b949ebfc42619e96cc8c509acb7a4735b861f220c" exitCode=0 Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.714157 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a","Type":"ContainerDied","Data":"0879f5156021494bc18c072b949ebfc42619e96cc8c509acb7a4735b861f220c"} Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.714184 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a","Type":"ContainerDied","Data":"ef6a29da8000f09c3316c1c7c530d3f650cbdb27d96c53520a8ecb257691fd93"} Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.714199 4820 scope.go:117] "RemoveContainer" containerID="6d3d8f636ab4d2c21449670689af0736ec5b57ab70d0941bdf3c0122270bb1ed" Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.714334 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.749061 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a" (UID: "e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.773800 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-config-data" (OuterVolumeSpecName: "config-data") pod "e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a" (UID: "e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.790350 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.790380 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.790390 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.790430 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.790439 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bsl9\" (UniqueName: \"kubernetes.io/projected/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-kube-api-access-7bsl9\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.790448 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.795736 4820 scope.go:117] "RemoveContainer" containerID="0879f5156021494bc18c072b949ebfc42619e96cc8c509acb7a4735b861f220c" Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.813707 4820 scope.go:117] "RemoveContainer" containerID="6d3d8f636ab4d2c21449670689af0736ec5b57ab70d0941bdf3c0122270bb1ed" Feb 01 14:38:44 crc kubenswrapper[4820]: E0201 14:38:44.814166 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d3d8f636ab4d2c21449670689af0736ec5b57ab70d0941bdf3c0122270bb1ed\": container with ID starting with 6d3d8f636ab4d2c21449670689af0736ec5b57ab70d0941bdf3c0122270bb1ed not found: ID does not exist" containerID="6d3d8f636ab4d2c21449670689af0736ec5b57ab70d0941bdf3c0122270bb1ed" Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.814202 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d3d8f636ab4d2c21449670689af0736ec5b57ab70d0941bdf3c0122270bb1ed"} err="failed to get container status \"6d3d8f636ab4d2c21449670689af0736ec5b57ab70d0941bdf3c0122270bb1ed\": rpc error: code = NotFound desc = could not find container \"6d3d8f636ab4d2c21449670689af0736ec5b57ab70d0941bdf3c0122270bb1ed\": container with ID starting with 6d3d8f636ab4d2c21449670689af0736ec5b57ab70d0941bdf3c0122270bb1ed not found: ID does not exist" Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 
14:38:44.814222 4820 scope.go:117] "RemoveContainer" containerID="0879f5156021494bc18c072b949ebfc42619e96cc8c509acb7a4735b861f220c" Feb 01 14:38:44 crc kubenswrapper[4820]: E0201 14:38:44.814591 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0879f5156021494bc18c072b949ebfc42619e96cc8c509acb7a4735b861f220c\": container with ID starting with 0879f5156021494bc18c072b949ebfc42619e96cc8c509acb7a4735b861f220c not found: ID does not exist" containerID="0879f5156021494bc18c072b949ebfc42619e96cc8c509acb7a4735b861f220c" Feb 01 14:38:44 crc kubenswrapper[4820]: I0201 14:38:44.814621 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0879f5156021494bc18c072b949ebfc42619e96cc8c509acb7a4735b861f220c"} err="failed to get container status \"0879f5156021494bc18c072b949ebfc42619e96cc8c509acb7a4735b861f220c\": rpc error: code = NotFound desc = could not find container \"0879f5156021494bc18c072b949ebfc42619e96cc8c509acb7a4735b861f220c\": container with ID starting with 0879f5156021494bc18c072b949ebfc42619e96cc8c509acb7a4735b861f220c not found: ID does not exist" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.052683 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.061555 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.081739 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 01 14:38:45 crc kubenswrapper[4820]: E0201 14:38:45.082081 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda189ec-fa75-4457-ad6b-1b74df127b0c" containerName="init" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.082097 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda189ec-fa75-4457-ad6b-1b74df127b0c" containerName="init" Feb 01 14:38:45 crc kubenswrapper[4820]: E0201 14:38:45.082109 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a" containerName="cinder-scheduler" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.082116 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a" containerName="cinder-scheduler" Feb 01 14:38:45 crc kubenswrapper[4820]: E0201 14:38:45.082127 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a" containerName="probe" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.082135 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a" containerName="probe" Feb 01 14:38:45 crc kubenswrapper[4820]: E0201 14:38:45.082149 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda189ec-fa75-4457-ad6b-1b74df127b0c" containerName="dnsmasq-dns" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.082154 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda189ec-fa75-4457-ad6b-1b74df127b0c" containerName="dnsmasq-dns" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.082315 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a" containerName="probe" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.082327 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a" 
containerName="cinder-scheduler" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.082343 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="dda189ec-fa75-4457-ad6b-1b74df127b0c" containerName="dnsmasq-dns" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.083282 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.086295 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.116470 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.196492 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnzw6\" (UniqueName: \"kubernetes.io/projected/03581fce-bf82-4dd7-8ebe-37e338ba81dc-kube-api-access-bnzw6\") pod \"cinder-scheduler-0\" (UID: \"03581fce-bf82-4dd7-8ebe-37e338ba81dc\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.196691 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/03581fce-bf82-4dd7-8ebe-37e338ba81dc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"03581fce-bf82-4dd7-8ebe-37e338ba81dc\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.196809 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03581fce-bf82-4dd7-8ebe-37e338ba81dc-scripts\") pod \"cinder-scheduler-0\" (UID: \"03581fce-bf82-4dd7-8ebe-37e338ba81dc\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.196948 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03581fce-bf82-4dd7-8ebe-37e338ba81dc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"03581fce-bf82-4dd7-8ebe-37e338ba81dc\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.197019 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03581fce-bf82-4dd7-8ebe-37e338ba81dc-config-data\") pod \"cinder-scheduler-0\" (UID: \"03581fce-bf82-4dd7-8ebe-37e338ba81dc\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.197098 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03581fce-bf82-4dd7-8ebe-37e338ba81dc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"03581fce-bf82-4dd7-8ebe-37e338ba81dc\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.211067 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a" path="/var/lib/kubelet/pods/e0b99ee6-faeb-4d47-a7ff-7bc8240ef69a/volumes" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.298856 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03581fce-bf82-4dd7-8ebe-37e338ba81dc-scripts\") pod 
\"cinder-scheduler-0\" (UID: \"03581fce-bf82-4dd7-8ebe-37e338ba81dc\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.299681 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03581fce-bf82-4dd7-8ebe-37e338ba81dc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"03581fce-bf82-4dd7-8ebe-37e338ba81dc\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.299812 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03581fce-bf82-4dd7-8ebe-37e338ba81dc-config-data\") pod \"cinder-scheduler-0\" (UID: \"03581fce-bf82-4dd7-8ebe-37e338ba81dc\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.299990 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03581fce-bf82-4dd7-8ebe-37e338ba81dc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"03581fce-bf82-4dd7-8ebe-37e338ba81dc\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.300572 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnzw6\" (UniqueName: \"kubernetes.io/projected/03581fce-bf82-4dd7-8ebe-37e338ba81dc-kube-api-access-bnzw6\") pod \"cinder-scheduler-0\" (UID: \"03581fce-bf82-4dd7-8ebe-37e338ba81dc\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.300732 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/03581fce-bf82-4dd7-8ebe-37e338ba81dc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"03581fce-bf82-4dd7-8ebe-37e338ba81dc\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.300916 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/03581fce-bf82-4dd7-8ebe-37e338ba81dc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"03581fce-bf82-4dd7-8ebe-37e338ba81dc\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.304213 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03581fce-bf82-4dd7-8ebe-37e338ba81dc-scripts\") pod \"cinder-scheduler-0\" (UID: \"03581fce-bf82-4dd7-8ebe-37e338ba81dc\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.310349 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03581fce-bf82-4dd7-8ebe-37e338ba81dc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"03581fce-bf82-4dd7-8ebe-37e338ba81dc\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.310559 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03581fce-bf82-4dd7-8ebe-37e338ba81dc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"03581fce-bf82-4dd7-8ebe-37e338ba81dc\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.311137 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/03581fce-bf82-4dd7-8ebe-37e338ba81dc-config-data\") pod \"cinder-scheduler-0\" (UID: \"03581fce-bf82-4dd7-8ebe-37e338ba81dc\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.317789 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnzw6\" (UniqueName: \"kubernetes.io/projected/03581fce-bf82-4dd7-8ebe-37e338ba81dc-kube-api-access-bnzw6\") pod \"cinder-scheduler-0\" (UID: \"03581fce-bf82-4dd7-8ebe-37e338ba81dc\") " pod="openstack/cinder-scheduler-0" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.399988 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.845478 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 01 14:38:45 crc kubenswrapper[4820]: W0201 14:38:45.846924 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03581fce_bf82_4dd7_8ebe_37e338ba81dc.slice/crio-feb7577c78cd49449c11e9e9091a1a7d798453377b883bee31ff1e65e4db707d WatchSource:0}: Error finding container feb7577c78cd49449c11e9e9091a1a7d798453377b883bee31ff1e65e4db707d: Status 404 returned error can't find the container with id feb7577c78cd49449c11e9e9091a1a7d798453377b883bee31ff1e65e4db707d Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.984931 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.986234 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.989243 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-c6x5b" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.991863 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 01 14:38:45 crc kubenswrapper[4820]: I0201 14:38:45.993977 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 01 14:38:46 crc kubenswrapper[4820]: I0201 14:38:46.016468 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k2fp\" (UniqueName: \"kubernetes.io/projected/06eafee6-9b74-4559-ba89-633ab4f4f036-kube-api-access-9k2fp\") pod \"openstackclient\" (UID: \"06eafee6-9b74-4559-ba89-633ab4f4f036\") " pod="openstack/openstackclient" Feb 01 14:38:46 crc kubenswrapper[4820]: I0201 14:38:46.016533 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/06eafee6-9b74-4559-ba89-633ab4f4f036-openstack-config-secret\") pod \"openstackclient\" (UID: \"06eafee6-9b74-4559-ba89-633ab4f4f036\") " pod="openstack/openstackclient" Feb 01 14:38:46 crc kubenswrapper[4820]: I0201 14:38:46.016583 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06eafee6-9b74-4559-ba89-633ab4f4f036-combined-ca-bundle\") pod \"openstackclient\" (UID: \"06eafee6-9b74-4559-ba89-633ab4f4f036\") " pod="openstack/openstackclient" Feb 01 14:38:46 crc kubenswrapper[4820]: I0201 14:38:46.016604 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/06eafee6-9b74-4559-ba89-633ab4f4f036-openstack-config\") pod \"openstackclient\" (UID: \"06eafee6-9b74-4559-ba89-633ab4f4f036\") " pod="openstack/openstackclient" Feb 01 14:38:46 crc kubenswrapper[4820]: I0201 14:38:46.018243 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 01 14:38:46 crc kubenswrapper[4820]: I0201 14:38:46.118612 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/06eafee6-9b74-4559-ba89-633ab4f4f036-openstack-config-secret\") pod \"openstackclient\" (UID: \"06eafee6-9b74-4559-ba89-633ab4f4f036\") " pod="openstack/openstackclient" Feb 01 14:38:46 crc kubenswrapper[4820]: I0201 14:38:46.118661 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06eafee6-9b74-4559-ba89-633ab4f4f036-combined-ca-bundle\") pod \"openstackclient\" (UID: \"06eafee6-9b74-4559-ba89-633ab4f4f036\") " pod="openstack/openstackclient" Feb 01 14:38:46 crc kubenswrapper[4820]: I0201 14:38:46.118687 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/06eafee6-9b74-4559-ba89-633ab4f4f036-openstack-config\") pod \"openstackclient\" (UID: \"06eafee6-9b74-4559-ba89-633ab4f4f036\") " pod="openstack/openstackclient" Feb 01 14:38:46 crc kubenswrapper[4820]: I0201 14:38:46.118800 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k2fp\" (UniqueName: \"kubernetes.io/projected/06eafee6-9b74-4559-ba89-633ab4f4f036-kube-api-access-9k2fp\") pod \"openstackclient\" (UID: \"06eafee6-9b74-4559-ba89-633ab4f4f036\") " pod="openstack/openstackclient" Feb 01 14:38:46 crc kubenswrapper[4820]: I0201 14:38:46.120258 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/06eafee6-9b74-4559-ba89-633ab4f4f036-openstack-config\") pod \"openstackclient\" (UID: \"06eafee6-9b74-4559-ba89-633ab4f4f036\") " pod="openstack/openstackclient" Feb 01 14:38:46 crc kubenswrapper[4820]: I0201 14:38:46.124104 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/06eafee6-9b74-4559-ba89-633ab4f4f036-openstack-config-secret\") pod \"openstackclient\" (UID: \"06eafee6-9b74-4559-ba89-633ab4f4f036\") " pod="openstack/openstackclient" Feb 01 14:38:46 crc kubenswrapper[4820]: I0201 14:38:46.129495 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06eafee6-9b74-4559-ba89-633ab4f4f036-combined-ca-bundle\") pod \"openstackclient\" (UID: \"06eafee6-9b74-4559-ba89-633ab4f4f036\") " pod="openstack/openstackclient" Feb 01 14:38:46 crc kubenswrapper[4820]: I0201 14:38:46.137665 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k2fp\" (UniqueName: \"kubernetes.io/projected/06eafee6-9b74-4559-ba89-633ab4f4f036-kube-api-access-9k2fp\") pod \"openstackclient\" (UID: \"06eafee6-9b74-4559-ba89-633ab4f4f036\") " pod="openstack/openstackclient" Feb 01 14:38:46 crc kubenswrapper[4820]: I0201 14:38:46.319300 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 01 14:38:46 crc kubenswrapper[4820]: I0201 14:38:46.734242 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"03581fce-bf82-4dd7-8ebe-37e338ba81dc","Type":"ContainerStarted","Data":"8fdd96ed88fcdcf337f7d2a77ef28e2b1f6291f2fcefcdb19fc0589f469a057a"} Feb 01 14:38:46 crc kubenswrapper[4820]: I0201 14:38:46.734756 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"03581fce-bf82-4dd7-8ebe-37e338ba81dc","Type":"ContainerStarted","Data":"feb7577c78cd49449c11e9e9091a1a7d798453377b883bee31ff1e65e4db707d"} Feb 01 14:38:46 crc kubenswrapper[4820]: I0201 14:38:46.779866 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 01 14:38:46 crc kubenswrapper[4820]: W0201 14:38:46.793781 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06eafee6_9b74_4559_ba89_633ab4f4f036.slice/crio-88f471fd91bf18bb6854c84db1d44a1ff11e6bdab1478b8553059b1a3b09407f WatchSource:0}: Error finding container 88f471fd91bf18bb6854c84db1d44a1ff11e6bdab1478b8553059b1a3b09407f: Status 404 returned error can't find the container with id 88f471fd91bf18bb6854c84db1d44a1ff11e6bdab1478b8553059b1a3b09407f Feb 01 14:38:47 crc kubenswrapper[4820]: I0201 14:38:47.756867 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"06eafee6-9b74-4559-ba89-633ab4f4f036","Type":"ContainerStarted","Data":"88f471fd91bf18bb6854c84db1d44a1ff11e6bdab1478b8553059b1a3b09407f"} Feb 01 14:38:47 crc kubenswrapper[4820]: I0201 14:38:47.759751 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"03581fce-bf82-4dd7-8ebe-37e338ba81dc","Type":"ContainerStarted","Data":"8d992f6be8d0bba5496f8ad55edfb600ec80ed3ef0fb4626dc83cae5703b728e"} Feb 01 14:38:47 crc kubenswrapper[4820]: I0201 14:38:47.778686 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.778664521 podStartE2EDuration="2.778664521s" podCreationTimestamp="2026-02-01 14:38:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:38:47.778089487 +0000 UTC m=+1069.298455771" watchObservedRunningTime="2026-02-01 14:38:47.778664521 +0000 UTC m=+1069.299030805" Feb 01 14:38:48 crc kubenswrapper[4820]: I0201 14:38:48.017064 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 01 14:38:48 crc kubenswrapper[4820]: I0201 14:38:48.380519 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:48 crc kubenswrapper[4820]: I0201 14:38:48.515015 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6bf6b5fdf4-gb9fp" Feb 01 14:38:48 crc kubenswrapper[4820]: I0201 14:38:48.576738 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7746bdf84d-qdnfk"] Feb 01 14:38:48 crc kubenswrapper[4820]: I0201 14:38:48.577000 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7746bdf84d-qdnfk" podUID="337df6da-71eb-4976-993f-b5c45e6ecdcc" containerName="placement-log" 
containerID="cri-o://9464f7cf92d57cfe6e206ee66467ccd7976651f763466d5b697503e55d13825d" gracePeriod=30 Feb 01 14:38:48 crc kubenswrapper[4820]: I0201 14:38:48.577402 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7746bdf84d-qdnfk" podUID="337df6da-71eb-4976-993f-b5c45e6ecdcc" containerName="placement-api" containerID="cri-o://5408bee028f3417b93b70d66efdeb58a7c0cbc374a978faeed9ab1b73b44c848" gracePeriod=30 Feb 01 14:38:48 crc kubenswrapper[4820]: I0201 14:38:48.770576 4820 generic.go:334] "Generic (PLEG): container finished" podID="337df6da-71eb-4976-993f-b5c45e6ecdcc" containerID="9464f7cf92d57cfe6e206ee66467ccd7976651f763466d5b697503e55d13825d" exitCode=143 Feb 01 14:38:48 crc kubenswrapper[4820]: I0201 14:38:48.770681 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7746bdf84d-qdnfk" event={"ID":"337df6da-71eb-4976-993f-b5c45e6ecdcc","Type":"ContainerDied","Data":"9464f7cf92d57cfe6e206ee66467ccd7976651f763466d5b697503e55d13825d"} Feb 01 14:38:50 crc kubenswrapper[4820]: I0201 14:38:50.393343 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:38:50 crc kubenswrapper[4820]: I0201 14:38:50.393979 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8255e6c-59ea-4449-bcec-264a12bf6d6e" containerName="ceilometer-central-agent" containerID="cri-o://7cbd242453c301fa3a31d5cacccb321c74f47f5a345754b05a5f2e825fff03e7" gracePeriod=30 Feb 01 14:38:50 crc kubenswrapper[4820]: I0201 14:38:50.394611 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8255e6c-59ea-4449-bcec-264a12bf6d6e" containerName="proxy-httpd" containerID="cri-o://c6842f5e67625d5251d25809ecf1b20983c7facbe0923c3d98f70cb072f7a747" gracePeriod=30 Feb 01 14:38:50 crc kubenswrapper[4820]: I0201 14:38:50.394649 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8255e6c-59ea-4449-bcec-264a12bf6d6e" containerName="ceilometer-notification-agent" containerID="cri-o://ff83f9d864b935236ca972a88ea10cd2856a7f045d007f6e9b1df79a178b0152" gracePeriod=30 Feb 01 14:38:50 crc kubenswrapper[4820]: I0201 14:38:50.394613 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8255e6c-59ea-4449-bcec-264a12bf6d6e" containerName="sg-core" containerID="cri-o://47e65daeb231c7c2ad17b8e7f0b3aff16b728fbd33206931915d3352cced240c" gracePeriod=30 Feb 01 14:38:50 crc kubenswrapper[4820]: I0201 14:38:50.400558 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 01 14:38:50 crc kubenswrapper[4820]: I0201 14:38:50.405177 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d8255e6c-59ea-4449-bcec-264a12bf6d6e" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.151:3000/\": EOF" Feb 01 14:38:50 crc kubenswrapper[4820]: I0201 14:38:50.793187 4820 generic.go:334] "Generic (PLEG): container finished" podID="d8255e6c-59ea-4449-bcec-264a12bf6d6e" containerID="c6842f5e67625d5251d25809ecf1b20983c7facbe0923c3d98f70cb072f7a747" exitCode=0 Feb 01 14:38:50 crc kubenswrapper[4820]: I0201 14:38:50.793215 4820 generic.go:334] "Generic (PLEG): container finished" podID="d8255e6c-59ea-4449-bcec-264a12bf6d6e" containerID="47e65daeb231c7c2ad17b8e7f0b3aff16b728fbd33206931915d3352cced240c" exitCode=2 
Feb 01 14:38:50 crc kubenswrapper[4820]: I0201 14:38:50.793233 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8255e6c-59ea-4449-bcec-264a12bf6d6e","Type":"ContainerDied","Data":"c6842f5e67625d5251d25809ecf1b20983c7facbe0923c3d98f70cb072f7a747"} Feb 01 14:38:50 crc kubenswrapper[4820]: I0201 14:38:50.793262 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8255e6c-59ea-4449-bcec-264a12bf6d6e","Type":"ContainerDied","Data":"47e65daeb231c7c2ad17b8e7f0b3aff16b728fbd33206931915d3352cced240c"} Feb 01 14:38:51 crc kubenswrapper[4820]: I0201 14:38:51.803712 4820 generic.go:334] "Generic (PLEG): container finished" podID="d8255e6c-59ea-4449-bcec-264a12bf6d6e" containerID="7cbd242453c301fa3a31d5cacccb321c74f47f5a345754b05a5f2e825fff03e7" exitCode=0 Feb 01 14:38:51 crc kubenswrapper[4820]: I0201 14:38:51.804088 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8255e6c-59ea-4449-bcec-264a12bf6d6e","Type":"ContainerDied","Data":"7cbd242453c301fa3a31d5cacccb321c74f47f5a345754b05a5f2e825fff03e7"} Feb 01 14:38:51 crc kubenswrapper[4820]: I0201 14:38:51.805528 4820 generic.go:334] "Generic (PLEG): container finished" podID="337df6da-71eb-4976-993f-b5c45e6ecdcc" containerID="5408bee028f3417b93b70d66efdeb58a7c0cbc374a978faeed9ab1b73b44c848" exitCode=0 Feb 01 14:38:51 crc kubenswrapper[4820]: I0201 14:38:51.805555 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7746bdf84d-qdnfk" event={"ID":"337df6da-71eb-4976-993f-b5c45e6ecdcc","Type":"ContainerDied","Data":"5408bee028f3417b93b70d66efdeb58a7c0cbc374a978faeed9ab1b73b44c848"} Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.296278 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-ftt9j"] Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.297840 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ftt9j" Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.311784 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-ftt9j"] Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.396543 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-d7rxb"] Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.397683 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-d7rxb" Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.410781 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-d7rxb"] Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.422820 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6xkk\" (UniqueName: \"kubernetes.io/projected/86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e-kube-api-access-f6xkk\") pod \"nova-api-db-create-ftt9j\" (UID: \"86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e\") " pod="openstack/nova-api-db-create-ftt9j" Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.422973 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e-operator-scripts\") pod \"nova-api-db-create-ftt9j\" (UID: \"86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e\") " pod="openstack/nova-api-db-create-ftt9j" Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.507724 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-f893-account-create-update-l5l4m"] Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.509089 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f893-account-create-update-l5l4m" Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.511616 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.519939 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-q9xjc"] Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.520957 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-q9xjc" Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.525525 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc56b\" (UniqueName: \"kubernetes.io/projected/a9d299ca-c553-421d-bbd8-7aaebe472a6d-kube-api-access-xc56b\") pod \"nova-cell0-db-create-d7rxb\" (UID: \"a9d299ca-c553-421d-bbd8-7aaebe472a6d\") " pod="openstack/nova-cell0-db-create-d7rxb" Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.525579 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9d299ca-c553-421d-bbd8-7aaebe472a6d-operator-scripts\") pod \"nova-cell0-db-create-d7rxb\" (UID: \"a9d299ca-c553-421d-bbd8-7aaebe472a6d\") " pod="openstack/nova-cell0-db-create-d7rxb" Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.525614 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6xkk\" (UniqueName: \"kubernetes.io/projected/86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e-kube-api-access-f6xkk\") pod \"nova-api-db-create-ftt9j\" (UID: \"86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e\") " pod="openstack/nova-api-db-create-ftt9j" Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.525693 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e-operator-scripts\") pod \"nova-api-db-create-ftt9j\" (UID: \"86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e\") " pod="openstack/nova-api-db-create-ftt9j" Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.526982 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e-operator-scripts\") pod \"nova-api-db-create-ftt9j\" (UID: \"86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e\") " pod="openstack/nova-api-db-create-ftt9j" Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.538497 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f893-account-create-update-l5l4m"] Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.558714 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6xkk\" (UniqueName: \"kubernetes.io/projected/86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e-kube-api-access-f6xkk\") pod \"nova-api-db-create-ftt9j\" (UID: \"86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e\") " pod="openstack/nova-api-db-create-ftt9j" Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.563797 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-q9xjc"] Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.621246 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-ftt9j" Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.626738 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47h5z\" (UniqueName: \"kubernetes.io/projected/756d7bda-e919-4ca5-8549-80f31cc37ac7-kube-api-access-47h5z\") pod \"nova-api-f893-account-create-update-l5l4m\" (UID: \"756d7bda-e919-4ca5-8549-80f31cc37ac7\") " pod="openstack/nova-api-f893-account-create-update-l5l4m" Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.626816 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd6jj\" (UniqueName: \"kubernetes.io/projected/093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2-kube-api-access-bd6jj\") pod \"nova-cell1-db-create-q9xjc\" (UID: \"093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2\") " pod="openstack/nova-cell1-db-create-q9xjc" Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.626908 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc56b\" (UniqueName: \"kubernetes.io/projected/a9d299ca-c553-421d-bbd8-7aaebe472a6d-kube-api-access-xc56b\") pod \"nova-cell0-db-create-d7rxb\" (UID: \"a9d299ca-c553-421d-bbd8-7aaebe472a6d\") " pod="openstack/nova-cell0-db-create-d7rxb" Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.626942 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2-operator-scripts\") pod \"nova-cell1-db-create-q9xjc\" (UID: \"093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2\") " pod="openstack/nova-cell1-db-create-q9xjc" Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.626966 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9d299ca-c553-421d-bbd8-7aaebe472a6d-operator-scripts\") pod \"nova-cell0-db-create-d7rxb\" (UID: \"a9d299ca-c553-421d-bbd8-7aaebe472a6d\") " pod="openstack/nova-cell0-db-create-d7rxb" Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.627022 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/756d7bda-e919-4ca5-8549-80f31cc37ac7-operator-scripts\") pod \"nova-api-f893-account-create-update-l5l4m\" (UID: \"756d7bda-e919-4ca5-8549-80f31cc37ac7\") " pod="openstack/nova-api-f893-account-create-update-l5l4m" Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.627776 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9d299ca-c553-421d-bbd8-7aaebe472a6d-operator-scripts\") pod \"nova-cell0-db-create-d7rxb\" (UID: \"a9d299ca-c553-421d-bbd8-7aaebe472a6d\") " pod="openstack/nova-cell0-db-create-d7rxb" Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.644745 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc56b\" (UniqueName: \"kubernetes.io/projected/a9d299ca-c553-421d-bbd8-7aaebe472a6d-kube-api-access-xc56b\") pod \"nova-cell0-db-create-d7rxb\" (UID: \"a9d299ca-c553-421d-bbd8-7aaebe472a6d\") " pod="openstack/nova-cell0-db-create-d7rxb" Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.702511 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0b48-account-create-update-l599b"] Feb 01 14:38:52 crc kubenswrapper[4820]: 
I0201 14:38:52.703639 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0b48-account-create-update-l599b"
Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.705954 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.718008 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d7rxb"
Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.728492 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd6jj\" (UniqueName: \"kubernetes.io/projected/093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2-kube-api-access-bd6jj\") pod \"nova-cell1-db-create-q9xjc\" (UID: \"093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2\") " pod="openstack/nova-cell1-db-create-q9xjc"
Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.728544 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2-operator-scripts\") pod \"nova-cell1-db-create-q9xjc\" (UID: \"093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2\") " pod="openstack/nova-cell1-db-create-q9xjc"
Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.728578 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/756d7bda-e919-4ca5-8549-80f31cc37ac7-operator-scripts\") pod \"nova-api-f893-account-create-update-l5l4m\" (UID: \"756d7bda-e919-4ca5-8549-80f31cc37ac7\") " pod="openstack/nova-api-f893-account-create-update-l5l4m"
Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.728662 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47h5z\" (UniqueName: \"kubernetes.io/projected/756d7bda-e919-4ca5-8549-80f31cc37ac7-kube-api-access-47h5z\") pod \"nova-api-f893-account-create-update-l5l4m\" (UID: \"756d7bda-e919-4ca5-8549-80f31cc37ac7\") " pod="openstack/nova-api-f893-account-create-update-l5l4m"
Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.729552 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/756d7bda-e919-4ca5-8549-80f31cc37ac7-operator-scripts\") pod \"nova-api-f893-account-create-update-l5l4m\" (UID: \"756d7bda-e919-4ca5-8549-80f31cc37ac7\") " pod="openstack/nova-api-f893-account-create-update-l5l4m"
Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.730041 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2-operator-scripts\") pod \"nova-cell1-db-create-q9xjc\" (UID: \"093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2\") " pod="openstack/nova-cell1-db-create-q9xjc"
Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.730110 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0b48-account-create-update-l599b"]
Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.746909 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd6jj\" (UniqueName: \"kubernetes.io/projected/093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2-kube-api-access-bd6jj\") pod \"nova-cell1-db-create-q9xjc\" (UID: \"093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2\") " pod="openstack/nova-cell1-db-create-q9xjc"
Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.769735 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47h5z\" (UniqueName: \"kubernetes.io/projected/756d7bda-e919-4ca5-8549-80f31cc37ac7-kube-api-access-47h5z\") pod \"nova-api-f893-account-create-update-l5l4m\" (UID: \"756d7bda-e919-4ca5-8549-80f31cc37ac7\") " pod="openstack/nova-api-f893-account-create-update-l5l4m"
Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.830454 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9zrf\" (UniqueName: \"kubernetes.io/projected/f099d076-3429-46a2-8592-97326229034c-kube-api-access-h9zrf\") pod \"nova-cell0-0b48-account-create-update-l599b\" (UID: \"f099d076-3429-46a2-8592-97326229034c\") " pod="openstack/nova-cell0-0b48-account-create-update-l599b"
Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.830573 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f099d076-3429-46a2-8592-97326229034c-operator-scripts\") pod \"nova-cell0-0b48-account-create-update-l599b\" (UID: \"f099d076-3429-46a2-8592-97326229034c\") " pod="openstack/nova-cell0-0b48-account-create-update-l599b"
Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.840143 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f893-account-create-update-l5l4m"
Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.902889 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-q9xjc"
Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.953738 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f099d076-3429-46a2-8592-97326229034c-operator-scripts\") pod \"nova-cell0-0b48-account-create-update-l599b\" (UID: \"f099d076-3429-46a2-8592-97326229034c\") " pod="openstack/nova-cell0-0b48-account-create-update-l599b"
Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.953855 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9zrf\" (UniqueName: \"kubernetes.io/projected/f099d076-3429-46a2-8592-97326229034c-kube-api-access-h9zrf\") pod \"nova-cell0-0b48-account-create-update-l599b\" (UID: \"f099d076-3429-46a2-8592-97326229034c\") " pod="openstack/nova-cell0-0b48-account-create-update-l599b"
Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.954579 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f099d076-3429-46a2-8592-97326229034c-operator-scripts\") pod \"nova-cell0-0b48-account-create-update-l599b\" (UID: \"f099d076-3429-46a2-8592-97326229034c\") " pod="openstack/nova-cell0-0b48-account-create-update-l599b"
Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.956398 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-e179-account-create-update-mbqvz"]
Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.957472 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e179-account-create-update-mbqvz"
Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.962186 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.964899 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e179-account-create-update-mbqvz"]
Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.978061 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9zrf\" (UniqueName: \"kubernetes.io/projected/f099d076-3429-46a2-8592-97326229034c-kube-api-access-h9zrf\") pod \"nova-cell0-0b48-account-create-update-l599b\" (UID: \"f099d076-3429-46a2-8592-97326229034c\") " pod="openstack/nova-cell0-0b48-account-create-update-l599b"
Feb 01 14:38:52 crc kubenswrapper[4820]: I0201 14:38:52.984081 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8466455f76-tpjsw"
Feb 01 14:38:53 crc kubenswrapper[4820]: I0201 14:38:53.028844 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0b48-account-create-update-l599b"
Feb 01 14:38:53 crc kubenswrapper[4820]: I0201 14:38:53.057472 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75db30c4-7da9-44d6-ad1a-02b67d4e8a85-operator-scripts\") pod \"nova-cell1-e179-account-create-update-mbqvz\" (UID: \"75db30c4-7da9-44d6-ad1a-02b67d4e8a85\") " pod="openstack/nova-cell1-e179-account-create-update-mbqvz"
Feb 01 14:38:53 crc kubenswrapper[4820]: I0201 14:38:53.057808 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4slh6\" (UniqueName: \"kubernetes.io/projected/75db30c4-7da9-44d6-ad1a-02b67d4e8a85-kube-api-access-4slh6\") pod \"nova-cell1-e179-account-create-update-mbqvz\" (UID: \"75db30c4-7da9-44d6-ad1a-02b67d4e8a85\") " pod="openstack/nova-cell1-e179-account-create-update-mbqvz"
Feb 01 14:38:53 crc kubenswrapper[4820]: I0201 14:38:53.160707 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4slh6\" (UniqueName: \"kubernetes.io/projected/75db30c4-7da9-44d6-ad1a-02b67d4e8a85-kube-api-access-4slh6\") pod \"nova-cell1-e179-account-create-update-mbqvz\" (UID: \"75db30c4-7da9-44d6-ad1a-02b67d4e8a85\") " pod="openstack/nova-cell1-e179-account-create-update-mbqvz"
Feb 01 14:38:53 crc kubenswrapper[4820]: I0201 14:38:53.160855 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75db30c4-7da9-44d6-ad1a-02b67d4e8a85-operator-scripts\") pod \"nova-cell1-e179-account-create-update-mbqvz\" (UID: \"75db30c4-7da9-44d6-ad1a-02b67d4e8a85\") " pod="openstack/nova-cell1-e179-account-create-update-mbqvz"
Feb 01 14:38:53 crc kubenswrapper[4820]: I0201 14:38:53.161596 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75db30c4-7da9-44d6-ad1a-02b67d4e8a85-operator-scripts\") pod \"nova-cell1-e179-account-create-update-mbqvz\" (UID: \"75db30c4-7da9-44d6-ad1a-02b67d4e8a85\") " pod="openstack/nova-cell1-e179-account-create-update-mbqvz"
Feb 01 14:38:53 crc kubenswrapper[4820]: I0201 14:38:53.183232 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4slh6\" (UniqueName: \"kubernetes.io/projected/75db30c4-7da9-44d6-ad1a-02b67d4e8a85-kube-api-access-4slh6\") pod \"nova-cell1-e179-account-create-update-mbqvz\" (UID: \"75db30c4-7da9-44d6-ad1a-02b67d4e8a85\") " pod="openstack/nova-cell1-e179-account-create-update-mbqvz"
Feb 01 14:38:53 crc kubenswrapper[4820]: I0201 14:38:53.344559 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e179-account-create-update-mbqvz"
Feb 01 14:38:54 crc kubenswrapper[4820]: I0201 14:38:54.926641 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8bc6c8777-j2rkq"
Feb 01 14:38:55 crc kubenswrapper[4820]: I0201 14:38:55.014837 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8466455f76-tpjsw"]
Feb 01 14:38:55 crc kubenswrapper[4820]: I0201 14:38:55.015092 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8466455f76-tpjsw" podUID="5518d4cc-191a-448e-8b7b-adc5ddfe587b" containerName="neutron-api" containerID="cri-o://af4b3c1226e0d0dddba0410c80616963139422b24675727d313dffa36b17bd07" gracePeriod=30
Feb 01 14:38:55 crc kubenswrapper[4820]: I0201 14:38:55.015573 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8466455f76-tpjsw" podUID="5518d4cc-191a-448e-8b7b-adc5ddfe587b" containerName="neutron-httpd" containerID="cri-o://40f58b40519c58dd81677771afb561ab72657721f0e8be248de46750194b4745" gracePeriod=30
Feb 01 14:38:55 crc kubenswrapper[4820]: I0201 14:38:55.662941 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 01 14:38:55 crc kubenswrapper[4820]: I0201 14:38:55.843049 4820 generic.go:334] "Generic (PLEG): container finished" podID="d8255e6c-59ea-4449-bcec-264a12bf6d6e" containerID="ff83f9d864b935236ca972a88ea10cd2856a7f045d007f6e9b1df79a178b0152" exitCode=0
Feb 01 14:38:55 crc kubenswrapper[4820]: I0201 14:38:55.843088 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8255e6c-59ea-4449-bcec-264a12bf6d6e","Type":"ContainerDied","Data":"ff83f9d864b935236ca972a88ea10cd2856a7f045d007f6e9b1df79a178b0152"}
Feb 01 14:38:55 crc kubenswrapper[4820]: I0201 14:38:55.851726 4820 generic.go:334] "Generic (PLEG): container finished" podID="5518d4cc-191a-448e-8b7b-adc5ddfe587b" containerID="40f58b40519c58dd81677771afb561ab72657721f0e8be248de46750194b4745" exitCode=0
Feb 01 14:38:55 crc kubenswrapper[4820]: I0201 14:38:55.851786 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8466455f76-tpjsw" event={"ID":"5518d4cc-191a-448e-8b7b-adc5ddfe587b","Type":"ContainerDied","Data":"40f58b40519c58dd81677771afb561ab72657721f0e8be248de46750194b4745"}
Feb 01 14:38:56 crc kubenswrapper[4820]: I0201 14:38:56.882079 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d8255e6c-59ea-4449-bcec-264a12bf6d6e" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.151:3000/\": dial tcp 10.217.0.151:3000: connect: connection refused"
Feb 01 14:38:57 crc kubenswrapper[4820]: I0201 14:38:57.890045 4820 generic.go:334] "Generic (PLEG): container finished" podID="5518d4cc-191a-448e-8b7b-adc5ddfe587b" containerID="af4b3c1226e0d0dddba0410c80616963139422b24675727d313dffa36b17bd07" exitCode=0
Feb 01 14:38:57 crc kubenswrapper[4820]: I0201 14:38:57.890098 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8466455f76-tpjsw" event={"ID":"5518d4cc-191a-448e-8b7b-adc5ddfe587b","Type":"ContainerDied","Data":"af4b3c1226e0d0dddba0410c80616963139422b24675727d313dffa36b17bd07"}
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.560356 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7746bdf84d-qdnfk"
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.571931 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.596276 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8466455f76-tpjsw"
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.655210 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5518d4cc-191a-448e-8b7b-adc5ddfe587b-config\") pod \"5518d4cc-191a-448e-8b7b-adc5ddfe587b\" (UID: \"5518d4cc-191a-448e-8b7b-adc5ddfe587b\") "
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.655243 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5518d4cc-191a-448e-8b7b-adc5ddfe587b-combined-ca-bundle\") pod \"5518d4cc-191a-448e-8b7b-adc5ddfe587b\" (UID: \"5518d4cc-191a-448e-8b7b-adc5ddfe587b\") "
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.655271 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c674s\" (UniqueName: \"kubernetes.io/projected/d8255e6c-59ea-4449-bcec-264a12bf6d6e-kube-api-access-c674s\") pod \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") "
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.655304 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-internal-tls-certs\") pod \"337df6da-71eb-4976-993f-b5c45e6ecdcc\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") "
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.655320 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8255e6c-59ea-4449-bcec-264a12bf6d6e-sg-core-conf-yaml\") pod \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") "
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.655338 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8255e6c-59ea-4449-bcec-264a12bf6d6e-scripts\") pod \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") "
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.655356 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8255e6c-59ea-4449-bcec-264a12bf6d6e-config-data\") pod \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") "
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.655381 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-config-data\") pod \"337df6da-71eb-4976-993f-b5c45e6ecdcc\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") "
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.655400 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-scripts\") pod \"337df6da-71eb-4976-993f-b5c45e6ecdcc\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") "
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.655457 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5518d4cc-191a-448e-8b7b-adc5ddfe587b-httpd-config\") pod \"5518d4cc-191a-448e-8b7b-adc5ddfe587b\" (UID: \"5518d4cc-191a-448e-8b7b-adc5ddfe587b\") "
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.655490 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-combined-ca-bundle\") pod \"337df6da-71eb-4976-993f-b5c45e6ecdcc\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") "
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.655565 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8255e6c-59ea-4449-bcec-264a12bf6d6e-run-httpd\") pod \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") "
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.655593 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8255e6c-59ea-4449-bcec-264a12bf6d6e-combined-ca-bundle\") pod \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") "
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.655623 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8255e6c-59ea-4449-bcec-264a12bf6d6e-log-httpd\") pod \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\" (UID: \"d8255e6c-59ea-4449-bcec-264a12bf6d6e\") "
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.655644 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5518d4cc-191a-448e-8b7b-adc5ddfe587b-ovndb-tls-certs\") pod \"5518d4cc-191a-448e-8b7b-adc5ddfe587b\" (UID: \"5518d4cc-191a-448e-8b7b-adc5ddfe587b\") "
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.655667 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/337df6da-71eb-4976-993f-b5c45e6ecdcc-logs\") pod \"337df6da-71eb-4976-993f-b5c45e6ecdcc\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") "
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.655688 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df2ck\" (UniqueName: \"kubernetes.io/projected/337df6da-71eb-4976-993f-b5c45e6ecdcc-kube-api-access-df2ck\") pod \"337df6da-71eb-4976-993f-b5c45e6ecdcc\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") "
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.655715 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nzww\" (UniqueName: \"kubernetes.io/projected/5518d4cc-191a-448e-8b7b-adc5ddfe587b-kube-api-access-8nzww\") pod \"5518d4cc-191a-448e-8b7b-adc5ddfe587b\" (UID: \"5518d4cc-191a-448e-8b7b-adc5ddfe587b\") "
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.655744 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-public-tls-certs\") pod \"337df6da-71eb-4976-993f-b5c45e6ecdcc\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") "
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.671464 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-scripts" (OuterVolumeSpecName: "scripts") pod "337df6da-71eb-4976-993f-b5c45e6ecdcc" (UID: "337df6da-71eb-4976-993f-b5c45e6ecdcc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.681818 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8255e6c-59ea-4449-bcec-264a12bf6d6e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d8255e6c-59ea-4449-bcec-264a12bf6d6e" (UID: "d8255e6c-59ea-4449-bcec-264a12bf6d6e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.695405 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/337df6da-71eb-4976-993f-b5c45e6ecdcc-logs" (OuterVolumeSpecName: "logs") pod "337df6da-71eb-4976-993f-b5c45e6ecdcc" (UID: "337df6da-71eb-4976-993f-b5c45e6ecdcc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.703002 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8255e6c-59ea-4449-bcec-264a12bf6d6e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d8255e6c-59ea-4449-bcec-264a12bf6d6e" (UID: "d8255e6c-59ea-4449-bcec-264a12bf6d6e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.704024 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5518d4cc-191a-448e-8b7b-adc5ddfe587b-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5518d4cc-191a-448e-8b7b-adc5ddfe587b" (UID: "5518d4cc-191a-448e-8b7b-adc5ddfe587b"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.711558 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/337df6da-71eb-4976-993f-b5c45e6ecdcc-kube-api-access-df2ck" (OuterVolumeSpecName: "kube-api-access-df2ck") pod "337df6da-71eb-4976-993f-b5c45e6ecdcc" (UID: "337df6da-71eb-4976-993f-b5c45e6ecdcc"). InnerVolumeSpecName "kube-api-access-df2ck". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.725566 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5518d4cc-191a-448e-8b7b-adc5ddfe587b-kube-api-access-8nzww" (OuterVolumeSpecName: "kube-api-access-8nzww") pod "5518d4cc-191a-448e-8b7b-adc5ddfe587b" (UID: "5518d4cc-191a-448e-8b7b-adc5ddfe587b"). InnerVolumeSpecName "kube-api-access-8nzww". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.734844 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8255e6c-59ea-4449-bcec-264a12bf6d6e-kube-api-access-c674s" (OuterVolumeSpecName: "kube-api-access-c674s") pod "d8255e6c-59ea-4449-bcec-264a12bf6d6e" (UID: "d8255e6c-59ea-4449-bcec-264a12bf6d6e"). InnerVolumeSpecName "kube-api-access-c674s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.749635 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8255e6c-59ea-4449-bcec-264a12bf6d6e-scripts" (OuterVolumeSpecName: "scripts") pod "d8255e6c-59ea-4449-bcec-264a12bf6d6e" (UID: "d8255e6c-59ea-4449-bcec-264a12bf6d6e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.757489 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c674s\" (UniqueName: \"kubernetes.io/projected/d8255e6c-59ea-4449-bcec-264a12bf6d6e-kube-api-access-c674s\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.757531 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8255e6c-59ea-4449-bcec-264a12bf6d6e-scripts\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.759615 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-scripts\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.759634 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5518d4cc-191a-448e-8b7b-adc5ddfe587b-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.759646 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8255e6c-59ea-4449-bcec-264a12bf6d6e-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.759657 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8255e6c-59ea-4449-bcec-264a12bf6d6e-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.759670 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/337df6da-71eb-4976-993f-b5c45e6ecdcc-logs\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.759682 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df2ck\" (UniqueName: \"kubernetes.io/projected/337df6da-71eb-4976-993f-b5c45e6ecdcc-kube-api-access-df2ck\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.759696 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nzww\" (UniqueName: \"kubernetes.io/projected/5518d4cc-191a-448e-8b7b-adc5ddfe587b-kube-api-access-8nzww\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.832789 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5518d4cc-191a-448e-8b7b-adc5ddfe587b-config" (OuterVolumeSpecName: "config") pod "5518d4cc-191a-448e-8b7b-adc5ddfe587b" (UID: "5518d4cc-191a-448e-8b7b-adc5ddfe587b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.853048 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "337df6da-71eb-4976-993f-b5c45e6ecdcc" (UID: "337df6da-71eb-4976-993f-b5c45e6ecdcc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.864113 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.864237 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5518d4cc-191a-448e-8b7b-adc5ddfe587b-config\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.892399 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8255e6c-59ea-4449-bcec-264a12bf6d6e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d8255e6c-59ea-4449-bcec-264a12bf6d6e" (UID: "d8255e6c-59ea-4449-bcec-264a12bf6d6e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.920581 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8466455f76-tpjsw" event={"ID":"5518d4cc-191a-448e-8b7b-adc5ddfe587b","Type":"ContainerDied","Data":"d991a29cec5528aa23b64309f0770824817b14d171a8bba3455a233c972bbda6"}
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.920638 4820 scope.go:117] "RemoveContainer" containerID="40f58b40519c58dd81677771afb561ab72657721f0e8be248de46750194b4745"
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.920774 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8466455f76-tpjsw"
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.925615 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7746bdf84d-qdnfk" event={"ID":"337df6da-71eb-4976-993f-b5c45e6ecdcc","Type":"ContainerDied","Data":"10f93b0fb87a0aa171e63aa72de982898dd0243425bfc9dadc330c3be1c58443"}
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.925707 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7746bdf84d-qdnfk"
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.932865 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8255e6c-59ea-4449-bcec-264a12bf6d6e","Type":"ContainerDied","Data":"76936d492ceac0fbfeeede8ea87447c86774da965e25e10dd874d13d7252157c"}
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.933001 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.964337 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "337df6da-71eb-4976-993f-b5c45e6ecdcc" (UID: "337df6da-71eb-4976-993f-b5c45e6ecdcc"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.965061 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-public-tls-certs\") pod \"337df6da-71eb-4976-993f-b5c45e6ecdcc\" (UID: \"337df6da-71eb-4976-993f-b5c45e6ecdcc\") "
Feb 01 14:38:58 crc kubenswrapper[4820]: W0201 14:38:58.965372 4820 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/337df6da-71eb-4976-993f-b5c45e6ecdcc/volumes/kubernetes.io~secret/public-tls-certs
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.965386 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "337df6da-71eb-4976-993f-b5c45e6ecdcc" (UID: "337df6da-71eb-4976-993f-b5c45e6ecdcc"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.966173 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.966196 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8255e6c-59ea-4449-bcec-264a12bf6d6e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.986581 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-config-data" (OuterVolumeSpecName: "config-data") pod "337df6da-71eb-4976-993f-b5c45e6ecdcc" (UID: "337df6da-71eb-4976-993f-b5c45e6ecdcc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.989318 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5518d4cc-191a-448e-8b7b-adc5ddfe587b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5518d4cc-191a-448e-8b7b-adc5ddfe587b" (UID: "5518d4cc-191a-448e-8b7b-adc5ddfe587b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:38:58 crc kubenswrapper[4820]: I0201 14:38:58.993194 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-d7rxb"]
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.048003 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8255e6c-59ea-4449-bcec-264a12bf6d6e-config-data" (OuterVolumeSpecName: "config-data") pod "d8255e6c-59ea-4449-bcec-264a12bf6d6e" (UID: "d8255e6c-59ea-4449-bcec-264a12bf6d6e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.049062 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "337df6da-71eb-4976-993f-b5c45e6ecdcc" (UID: "337df6da-71eb-4976-993f-b5c45e6ecdcc"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.060571 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8255e6c-59ea-4449-bcec-264a12bf6d6e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8255e6c-59ea-4449-bcec-264a12bf6d6e" (UID: "d8255e6c-59ea-4449-bcec-264a12bf6d6e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.066154 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5518d4cc-191a-448e-8b7b-adc5ddfe587b-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5518d4cc-191a-448e-8b7b-adc5ddfe587b" (UID: "5518d4cc-191a-448e-8b7b-adc5ddfe587b"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.068419 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8255e6c-59ea-4449-bcec-264a12bf6d6e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.068636 4820 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5518d4cc-191a-448e-8b7b-adc5ddfe587b-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.068786 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5518d4cc-191a-448e-8b7b-adc5ddfe587b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.068900 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.068979 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8255e6c-59ea-4449-bcec-264a12bf6d6e-config-data\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.069050 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/337df6da-71eb-4976-993f-b5c45e6ecdcc-config-data\") on node \"crc\" DevicePath \"\""
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.143622 4820 scope.go:117] "RemoveContainer" containerID="af4b3c1226e0d0dddba0410c80616963139422b24675727d313dffa36b17bd07"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.166289 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-q9xjc"]
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.168360 4820 scope.go:117] "RemoveContainer" containerID="5408bee028f3417b93b70d66efdeb58a7c0cbc374a978faeed9ab1b73b44c848"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.275563 4820 scope.go:117] "RemoveContainer" containerID="9464f7cf92d57cfe6e206ee66467ccd7976651f763466d5b697503e55d13825d"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.316611 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-ftt9j"]
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.337131 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e179-account-create-update-mbqvz"]
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.345822 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.349458 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f893-account-create-update-l5l4m"]
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.352996 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.361947 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0b48-account-create-update-l599b"]
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.382800 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.393069 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.400114 4820 scope.go:117] "RemoveContainer" containerID="c6842f5e67625d5251d25809ecf1b20983c7facbe0923c3d98f70cb072f7a747"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.404934 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.413936 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8466455f76-tpjsw"]
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.423532 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8466455f76-tpjsw"]
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.431627 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 01 14:38:59 crc kubenswrapper[4820]: E0201 14:38:59.432845 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="337df6da-71eb-4976-993f-b5c45e6ecdcc" containerName="placement-api"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.432865 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="337df6da-71eb-4976-993f-b5c45e6ecdcc" containerName="placement-api"
Feb 01 14:38:59 crc kubenswrapper[4820]: E0201 14:38:59.432999 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8255e6c-59ea-4449-bcec-264a12bf6d6e" containerName="ceilometer-central-agent"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.433011 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8255e6c-59ea-4449-bcec-264a12bf6d6e" containerName="ceilometer-central-agent"
Feb 01 14:38:59 crc kubenswrapper[4820]: E0201 14:38:59.433021 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="337df6da-71eb-4976-993f-b5c45e6ecdcc" containerName="placement-log"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.433027 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="337df6da-71eb-4976-993f-b5c45e6ecdcc" containerName="placement-log"
Feb 01 14:38:59 crc kubenswrapper[4820]: E0201 14:38:59.433055 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8255e6c-59ea-4449-bcec-264a12bf6d6e" containerName="ceilometer-notification-agent"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.433061 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8255e6c-59ea-4449-bcec-264a12bf6d6e" containerName="ceilometer-notification-agent"
Feb 01 14:38:59 crc kubenswrapper[4820]: E0201 14:38:59.433073 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5518d4cc-191a-448e-8b7b-adc5ddfe587b" containerName="neutron-api"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.433078 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5518d4cc-191a-448e-8b7b-adc5ddfe587b" containerName="neutron-api"
Feb 01 14:38:59 crc kubenswrapper[4820]: E0201 14:38:59.433088 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8255e6c-59ea-4449-bcec-264a12bf6d6e" containerName="proxy-httpd"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.433095 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8255e6c-59ea-4449-bcec-264a12bf6d6e" containerName="proxy-httpd"
Feb 01 14:38:59 crc kubenswrapper[4820]: E0201 14:38:59.433106 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5518d4cc-191a-448e-8b7b-adc5ddfe587b" containerName="neutron-httpd"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.433111 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5518d4cc-191a-448e-8b7b-adc5ddfe587b" containerName="neutron-httpd"
Feb 01 14:38:59 crc kubenswrapper[4820]: E0201 14:38:59.433122 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8255e6c-59ea-4449-bcec-264a12bf6d6e" containerName="sg-core"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.433128 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8255e6c-59ea-4449-bcec-264a12bf6d6e" containerName="sg-core"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.433295 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5518d4cc-191a-448e-8b7b-adc5ddfe587b" containerName="neutron-api"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.433307 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5518d4cc-191a-448e-8b7b-adc5ddfe587b" containerName="neutron-httpd"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.433322 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="337df6da-71eb-4976-993f-b5c45e6ecdcc" containerName="placement-api"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.433342 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8255e6c-59ea-4449-bcec-264a12bf6d6e" containerName="ceilometer-central-agent"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.433347 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="337df6da-71eb-4976-993f-b5c45e6ecdcc" containerName="placement-log"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.433359 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8255e6c-59ea-4449-bcec-264a12bf6d6e" containerName="sg-core"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.433367 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8255e6c-59ea-4449-bcec-264a12bf6d6e" containerName="proxy-httpd"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.433376 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8255e6c-59ea-4449-bcec-264a12bf6d6e" containerName="ceilometer-notification-agent"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.435374 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.438439 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.438548 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.442854 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.462226 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7746bdf84d-qdnfk"]
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.476568 4820 scope.go:117] "RemoveContainer" containerID="47e65daeb231c7c2ad17b8e7f0b3aff16b728fbd33206931915d3352cced240c"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.477033 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7746bdf84d-qdnfk"]
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.500046 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9jd8\" (UniqueName: \"kubernetes.io/projected/3d3e8287-0a4a-44fe-8b72-41276a73ad66-kube-api-access-b9jd8\") pod \"ceilometer-0\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " pod="openstack/ceilometer-0"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.500145 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d3e8287-0a4a-44fe-8b72-41276a73ad66-scripts\") pod \"ceilometer-0\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " pod="openstack/ceilometer-0"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.500196 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3e8287-0a4a-44fe-8b72-41276a73ad66-config-data\") pod \"ceilometer-0\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " pod="openstack/ceilometer-0"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.500238 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d3e8287-0a4a-44fe-8b72-41276a73ad66-log-httpd\") pod \"ceilometer-0\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " pod="openstack/ceilometer-0"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.500265 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3d3e8287-0a4a-44fe-8b72-41276a73ad66-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " pod="openstack/ceilometer-0"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.500308 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3e8287-0a4a-44fe-8b72-41276a73ad66-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " pod="openstack/ceilometer-0"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.500337 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d3e8287-0a4a-44fe-8b72-41276a73ad66-run-httpd\") pod \"ceilometer-0\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " pod="openstack/ceilometer-0"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.529197 4820 scope.go:117] "RemoveContainer" containerID="ff83f9d864b935236ca972a88ea10cd2856a7f045d007f6e9b1df79a178b0152"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.602279 4820 scope.go:117] "RemoveContainer" containerID="7cbd242453c301fa3a31d5cacccb321c74f47f5a345754b05a5f2e825fff03e7"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.602421 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3e8287-0a4a-44fe-8b72-41276a73ad66-config-data\") pod \"ceilometer-0\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " pod="openstack/ceilometer-0"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.602479 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d3e8287-0a4a-44fe-8b72-41276a73ad66-log-httpd\") pod \"ceilometer-0\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " pod="openstack/ceilometer-0"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.602506 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3d3e8287-0a4a-44fe-8b72-41276a73ad66-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " pod="openstack/ceilometer-0"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.602541 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3e8287-0a4a-44fe-8b72-41276a73ad66-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " pod="openstack/ceilometer-0"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.602560 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d3e8287-0a4a-44fe-8b72-41276a73ad66-run-httpd\") pod \"ceilometer-0\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " pod="openstack/ceilometer-0"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.602619 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9jd8\" (UniqueName: \"kubernetes.io/projected/3d3e8287-0a4a-44fe-8b72-41276a73ad66-kube-api-access-b9jd8\") pod \"ceilometer-0\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " pod="openstack/ceilometer-0"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.602680 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d3e8287-0a4a-44fe-8b72-41276a73ad66-scripts\") pod \"ceilometer-0\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " pod="openstack/ceilometer-0"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.603069 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d3e8287-0a4a-44fe-8b72-41276a73ad66-run-httpd\") pod \"ceilometer-0\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " pod="openstack/ceilometer-0"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.603380 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d3e8287-0a4a-44fe-8b72-41276a73ad66-log-httpd\") pod \"ceilometer-0\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " pod="openstack/ceilometer-0"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.616914 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3e8287-0a4a-44fe-8b72-41276a73ad66-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " pod="openstack/ceilometer-0"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.618138 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3e8287-0a4a-44fe-8b72-41276a73ad66-config-data\") pod \"ceilometer-0\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " pod="openstack/ceilometer-0"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.620413 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d3e8287-0a4a-44fe-8b72-41276a73ad66-scripts\") pod \"ceilometer-0\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " pod="openstack/ceilometer-0"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.621085 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9jd8\" (UniqueName: \"kubernetes.io/projected/3d3e8287-0a4a-44fe-8b72-41276a73ad66-kube-api-access-b9jd8\") pod \"ceilometer-0\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " pod="openstack/ceilometer-0"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.623286 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3d3e8287-0a4a-44fe-8b72-41276a73ad66-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " pod="openstack/ceilometer-0"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.777105 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.944379 4820 generic.go:334] "Generic (PLEG): container finished" podID="86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e" containerID="6f3247d86f67150168b22aa6a199e5c3a5a4b9ea1d4f28b6b3e34db834970357" exitCode=0
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.944731 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ftt9j" event={"ID":"86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e","Type":"ContainerDied","Data":"6f3247d86f67150168b22aa6a199e5c3a5a4b9ea1d4f28b6b3e34db834970357"}
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.944762 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ftt9j" event={"ID":"86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e","Type":"ContainerStarted","Data":"bccf5dd6d0398c31c4b324ce162101ea40f03b721ef46b933726f6c4ebef42ea"}
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.948596 4820 generic.go:334] "Generic (PLEG): container finished" podID="a9d299ca-c553-421d-bbd8-7aaebe472a6d" containerID="52a16d845830e10f1838d4cf45af8bd8bd00fbd402a134ade724aa12c06f8342" exitCode=0
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.948747 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d7rxb" event={"ID":"a9d299ca-c553-421d-bbd8-7aaebe472a6d","Type":"ContainerDied","Data":"52a16d845830e10f1838d4cf45af8bd8bd00fbd402a134ade724aa12c06f8342"}
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.948782 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d7rxb" event={"ID":"a9d299ca-c553-421d-bbd8-7aaebe472a6d","Type":"ContainerStarted","Data":"7166a8b26262c585a6a63b3c0f958d715038572fb13b9aa6a8348dfc5f909c70"}
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.952518 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"06eafee6-9b74-4559-ba89-633ab4f4f036","Type":"ContainerStarted","Data":"d0498dd5ee07f57becee89dc70037dadb91e6146482179b98d65d2c8d0e660f4"}
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.979314 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f893-account-create-update-l5l4m" event={"ID":"756d7bda-e919-4ca5-8549-80f31cc37ac7","Type":"ContainerStarted","Data":"08bd69bcf00d21e3810db33270484aab1dacee8a10179b09f8452023b490e322"}
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.979366 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f893-account-create-update-l5l4m" event={"ID":"756d7bda-e919-4ca5-8549-80f31cc37ac7","Type":"ContainerStarted","Data":"289def72448881448cd16fb56ff7d520ca7df893b5e124286045a10132fbf59b"}
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.980720 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.244602159 podStartE2EDuration="14.980702327s" podCreationTimestamp="2026-02-01 14:38:45 +0000 UTC" firstStartedPulling="2026-02-01 14:38:46.796948033 +0000 UTC m=+1068.317314317" lastFinishedPulling="2026-02-01 14:38:58.533048201 +0000 UTC m=+1080.053414485" observedRunningTime="2026-02-01 14:38:59.980548312 +0000 UTC m=+1081.500914606" watchObservedRunningTime="2026-02-01 14:38:59.980702327 +0000 UTC m=+1081.501068601"
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.985107 4820 generic.go:334] "Generic (PLEG): container finished" podID="f099d076-3429-46a2-8592-97326229034c" containerID="1549ff0345fd19a65d575dda8d07d9198415fd693a6c677d5e6c0758f760611d" exitCode=0
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.985215 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0b48-account-create-update-l599b" event={"ID":"f099d076-3429-46a2-8592-97326229034c","Type":"ContainerDied","Data":"1549ff0345fd19a65d575dda8d07d9198415fd693a6c677d5e6c0758f760611d"}
Feb 01 14:38:59 crc kubenswrapper[4820]: I0201 14:38:59.985245 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0b48-account-create-update-l599b" event={"ID":"f099d076-3429-46a2-8592-97326229034c","Type":"ContainerStarted","Data":"5462e262bd2ee2e25f308deb30f6ab300f82255e4c209c62edf291df16cb7030"}
Feb 01 14:39:00 crc kubenswrapper[4820]: I0201 14:38:59.999931 4820 generic.go:334] "Generic (PLEG): container finished" podID="093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2" containerID="c8d1951d4ed50234a117e9798814a5da75b8b7e9e34963963b674913b31b7f8b" exitCode=0
Feb 01 14:39:00 crc kubenswrapper[4820]: I0201 14:39:00.000159 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q9xjc" event={"ID":"093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2","Type":"ContainerDied","Data":"c8d1951d4ed50234a117e9798814a5da75b8b7e9e34963963b674913b31b7f8b"}
Feb 01 14:39:00 crc kubenswrapper[4820]: I0201 14:39:00.000199 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q9xjc" event={"ID":"093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2","Type":"ContainerStarted","Data":"568ffbcedc465373db37b76c59bb278374bb9ecfe2e2b2a6d58c46ba3732d06b"}
Feb 01 14:39:00 crc kubenswrapper[4820]: I0201 14:39:00.021375 4820 generic.go:334] "Generic (PLEG): container finished" podID="75db30c4-7da9-44d6-ad1a-02b67d4e8a85" containerID="83f0a12e8039dcd8c3e9300c2d07347c0272cb54539306ef84e85e16f36aeb3c" exitCode=0
Feb 01 14:39:00 crc kubenswrapper[4820]: I0201 14:39:00.021414 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e179-account-create-update-mbqvz" event={"ID":"75db30c4-7da9-44d6-ad1a-02b67d4e8a85","Type":"ContainerDied","Data":"83f0a12e8039dcd8c3e9300c2d07347c0272cb54539306ef84e85e16f36aeb3c"}
Feb 01 14:39:00 crc kubenswrapper[4820]: I0201 14:39:00.021433 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e179-account-create-update-mbqvz" event={"ID":"75db30c4-7da9-44d6-ad1a-02b67d4e8a85","Type":"ContainerStarted","Data":"6911f96421b6f5b87caad3b29c8b4112fd0007ef6ddbc4b5d47bb2134ccaa0c5"}
Feb 01 14:39:00 crc kubenswrapper[4820]: I0201 14:39:00.217558 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.030749 4820 generic.go:334] "Generic (PLEG): container finished" podID="756d7bda-e919-4ca5-8549-80f31cc37ac7" containerID="08bd69bcf00d21e3810db33270484aab1dacee8a10179b09f8452023b490e322" exitCode=0
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.030830 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f893-account-create-update-l5l4m" event={"ID":"756d7bda-e919-4ca5-8549-80f31cc37ac7","Type":"ContainerDied","Data":"08bd69bcf00d21e3810db33270484aab1dacee8a10179b09f8452023b490e322"}
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.033426 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d3e8287-0a4a-44fe-8b72-41276a73ad66","Type":"ContainerStarted","Data":"0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4"}
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.033464 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d3e8287-0a4a-44fe-8b72-41276a73ad66","Type":"ContainerStarted","Data":"84b69b80472d83639b38577b729d8fd0c55967744e10695d0371ed9ea6e8ec2b"}
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.215615 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="337df6da-71eb-4976-993f-b5c45e6ecdcc" path="/var/lib/kubelet/pods/337df6da-71eb-4976-993f-b5c45e6ecdcc/volumes"
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.216369 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5518d4cc-191a-448e-8b7b-adc5ddfe587b" path="/var/lib/kubelet/pods/5518d4cc-191a-448e-8b7b-adc5ddfe587b/volumes"
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.217657 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8255e6c-59ea-4449-bcec-264a12bf6d6e" path="/var/lib/kubelet/pods/d8255e6c-59ea-4449-bcec-264a12bf6d6e/volumes"
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.388994 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d7rxb"
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.438347 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc56b\" (UniqueName: \"kubernetes.io/projected/a9d299ca-c553-421d-bbd8-7aaebe472a6d-kube-api-access-xc56b\") pod \"a9d299ca-c553-421d-bbd8-7aaebe472a6d\" (UID: \"a9d299ca-c553-421d-bbd8-7aaebe472a6d\") "
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.438449 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9d299ca-c553-421d-bbd8-7aaebe472a6d-operator-scripts\") pod \"a9d299ca-c553-421d-bbd8-7aaebe472a6d\" (UID: \"a9d299ca-c553-421d-bbd8-7aaebe472a6d\") "
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.439698 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9d299ca-c553-421d-bbd8-7aaebe472a6d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9d299ca-c553-421d-bbd8-7aaebe472a6d" (UID: "a9d299ca-c553-421d-bbd8-7aaebe472a6d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.453083 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9d299ca-c553-421d-bbd8-7aaebe472a6d-kube-api-access-xc56b" (OuterVolumeSpecName: "kube-api-access-xc56b") pod "a9d299ca-c553-421d-bbd8-7aaebe472a6d" (UID: "a9d299ca-c553-421d-bbd8-7aaebe472a6d"). InnerVolumeSpecName "kube-api-access-xc56b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.540325 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9d299ca-c553-421d-bbd8-7aaebe472a6d-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.540365 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc56b\" (UniqueName: \"kubernetes.io/projected/a9d299ca-c553-421d-bbd8-7aaebe472a6d-kube-api-access-xc56b\") on node \"crc\" DevicePath \"\""
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.562110 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-q9xjc"
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.641411 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2-operator-scripts\") pod \"093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2\" (UID: \"093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2\") "
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.641580 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd6jj\" (UniqueName: \"kubernetes.io/projected/093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2-kube-api-access-bd6jj\") pod \"093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2\" (UID: \"093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2\") "
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.643419 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2" (UID: "093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.646196 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2-kube-api-access-bd6jj" (OuterVolumeSpecName: "kube-api-access-bd6jj") pod "093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2" (UID: "093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2"). InnerVolumeSpecName "kube-api-access-bd6jj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.702595 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e179-account-create-update-mbqvz"
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.723633 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0b48-account-create-update-l599b"
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.730136 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ftt9j"
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.745038 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75db30c4-7da9-44d6-ad1a-02b67d4e8a85-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "75db30c4-7da9-44d6-ad1a-02b67d4e8a85" (UID: "75db30c4-7da9-44d6-ad1a-02b67d4e8a85"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.745099 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75db30c4-7da9-44d6-ad1a-02b67d4e8a85-operator-scripts\") pod \"75db30c4-7da9-44d6-ad1a-02b67d4e8a85\" (UID: \"75db30c4-7da9-44d6-ad1a-02b67d4e8a85\") "
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.745365 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4slh6\" (UniqueName: \"kubernetes.io/projected/75db30c4-7da9-44d6-ad1a-02b67d4e8a85-kube-api-access-4slh6\") pod \"75db30c4-7da9-44d6-ad1a-02b67d4e8a85\" (UID: \"75db30c4-7da9-44d6-ad1a-02b67d4e8a85\") "
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.745829 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bd6jj\" (UniqueName: \"kubernetes.io/projected/093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2-kube-api-access-bd6jj\") on node \"crc\" DevicePath \"\""
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.745859 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75db30c4-7da9-44d6-ad1a-02b67d4e8a85-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.745868 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.749270 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75db30c4-7da9-44d6-ad1a-02b67d4e8a85-kube-api-access-4slh6" (OuterVolumeSpecName: "kube-api-access-4slh6") pod "75db30c4-7da9-44d6-ad1a-02b67d4e8a85" (UID: "75db30c4-7da9-44d6-ad1a-02b67d4e8a85"). InnerVolumeSpecName "kube-api-access-4slh6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.750468 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-f893-account-create-update-l5l4m" Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.847510 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9zrf\" (UniqueName: \"kubernetes.io/projected/f099d076-3429-46a2-8592-97326229034c-kube-api-access-h9zrf\") pod \"f099d076-3429-46a2-8592-97326229034c\" (UID: \"f099d076-3429-46a2-8592-97326229034c\") " Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.847816 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47h5z\" (UniqueName: \"kubernetes.io/projected/756d7bda-e919-4ca5-8549-80f31cc37ac7-kube-api-access-47h5z\") pod \"756d7bda-e919-4ca5-8549-80f31cc37ac7\" (UID: \"756d7bda-e919-4ca5-8549-80f31cc37ac7\") " Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.848153 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6xkk\" (UniqueName: \"kubernetes.io/projected/86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e-kube-api-access-f6xkk\") pod \"86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e\" (UID: \"86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e\") " Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.848310 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/756d7bda-e919-4ca5-8549-80f31cc37ac7-operator-scripts\") pod \"756d7bda-e919-4ca5-8549-80f31cc37ac7\" (UID: \"756d7bda-e919-4ca5-8549-80f31cc37ac7\") " Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.848387 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e-operator-scripts\") pod \"86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e\" (UID: \"86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e\") " Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.848520 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f099d076-3429-46a2-8592-97326229034c-operator-scripts\") pod \"f099d076-3429-46a2-8592-97326229034c\" (UID: \"f099d076-3429-46a2-8592-97326229034c\") " Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.848821 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/756d7bda-e919-4ca5-8549-80f31cc37ac7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "756d7bda-e919-4ca5-8549-80f31cc37ac7" (UID: "756d7bda-e919-4ca5-8549-80f31cc37ac7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.849141 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e" (UID: "86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.849330 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4slh6\" (UniqueName: \"kubernetes.io/projected/75db30c4-7da9-44d6-ad1a-02b67d4e8a85-kube-api-access-4slh6\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.849439 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/756d7bda-e919-4ca5-8549-80f31cc37ac7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.849367 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f099d076-3429-46a2-8592-97326229034c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f099d076-3429-46a2-8592-97326229034c" (UID: "f099d076-3429-46a2-8592-97326229034c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.850548 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/756d7bda-e919-4ca5-8549-80f31cc37ac7-kube-api-access-47h5z" (OuterVolumeSpecName: "kube-api-access-47h5z") pod "756d7bda-e919-4ca5-8549-80f31cc37ac7" (UID: "756d7bda-e919-4ca5-8549-80f31cc37ac7"). InnerVolumeSpecName "kube-api-access-47h5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.853308 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f099d076-3429-46a2-8592-97326229034c-kube-api-access-h9zrf" (OuterVolumeSpecName: "kube-api-access-h9zrf") pod "f099d076-3429-46a2-8592-97326229034c" (UID: "f099d076-3429-46a2-8592-97326229034c"). InnerVolumeSpecName "kube-api-access-h9zrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.860494 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e-kube-api-access-f6xkk" (OuterVolumeSpecName: "kube-api-access-f6xkk") pod "86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e" (UID: "86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e"). InnerVolumeSpecName "kube-api-access-f6xkk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.951817 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47h5z\" (UniqueName: \"kubernetes.io/projected/756d7bda-e919-4ca5-8549-80f31cc37ac7-kube-api-access-47h5z\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.952109 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6xkk\" (UniqueName: \"kubernetes.io/projected/86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e-kube-api-access-f6xkk\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.952192 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.952270 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f099d076-3429-46a2-8592-97326229034c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:01 crc kubenswrapper[4820]: I0201 14:39:01.952341 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9zrf\" (UniqueName: \"kubernetes.io/projected/f099d076-3429-46a2-8592-97326229034c-kube-api-access-h9zrf\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:02 crc kubenswrapper[4820]: I0201 14:39:02.045518 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q9xjc" event={"ID":"093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2","Type":"ContainerDied","Data":"568ffbcedc465373db37b76c59bb278374bb9ecfe2e2b2a6d58c46ba3732d06b"} Feb 01 14:39:02 crc kubenswrapper[4820]: I0201 14:39:02.046603 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="568ffbcedc465373db37b76c59bb278374bb9ecfe2e2b2a6d58c46ba3732d06b" Feb 01 14:39:02 crc kubenswrapper[4820]: I0201 14:39:02.045543 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-q9xjc" Feb 01 14:39:02 crc kubenswrapper[4820]: I0201 14:39:02.047555 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d7rxb" Feb 01 14:39:02 crc kubenswrapper[4820]: I0201 14:39:02.047548 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d7rxb" event={"ID":"a9d299ca-c553-421d-bbd8-7aaebe472a6d","Type":"ContainerDied","Data":"7166a8b26262c585a6a63b3c0f958d715038572fb13b9aa6a8348dfc5f909c70"} Feb 01 14:39:02 crc kubenswrapper[4820]: I0201 14:39:02.047685 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7166a8b26262c585a6a63b3c0f958d715038572fb13b9aa6a8348dfc5f909c70" Feb 01 14:39:02 crc kubenswrapper[4820]: I0201 14:39:02.049426 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e179-account-create-update-mbqvz" Feb 01 14:39:02 crc kubenswrapper[4820]: I0201 14:39:02.049425 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e179-account-create-update-mbqvz" event={"ID":"75db30c4-7da9-44d6-ad1a-02b67d4e8a85","Type":"ContainerDied","Data":"6911f96421b6f5b87caad3b29c8b4112fd0007ef6ddbc4b5d47bb2134ccaa0c5"} Feb 01 14:39:02 crc kubenswrapper[4820]: I0201 14:39:02.049582 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6911f96421b6f5b87caad3b29c8b4112fd0007ef6ddbc4b5d47bb2134ccaa0c5" Feb 01 14:39:02 crc kubenswrapper[4820]: I0201 14:39:02.051232 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f893-account-create-update-l5l4m" Feb 01 14:39:02 crc kubenswrapper[4820]: I0201 14:39:02.051249 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f893-account-create-update-l5l4m" event={"ID":"756d7bda-e919-4ca5-8549-80f31cc37ac7","Type":"ContainerDied","Data":"289def72448881448cd16fb56ff7d520ca7df893b5e124286045a10132fbf59b"} Feb 01 14:39:02 crc kubenswrapper[4820]: I0201 14:39:02.051276 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="289def72448881448cd16fb56ff7d520ca7df893b5e124286045a10132fbf59b" Feb 01 14:39:02 crc kubenswrapper[4820]: I0201 14:39:02.053398 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d3e8287-0a4a-44fe-8b72-41276a73ad66","Type":"ContainerStarted","Data":"609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f"} Feb 01 14:39:02 crc kubenswrapper[4820]: I0201 14:39:02.055074 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0b48-account-create-update-l599b" event={"ID":"f099d076-3429-46a2-8592-97326229034c","Type":"ContainerDied","Data":"5462e262bd2ee2e25f308deb30f6ab300f82255e4c209c62edf291df16cb7030"} Feb 01 14:39:02 crc kubenswrapper[4820]: I0201 14:39:02.055110 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5462e262bd2ee2e25f308deb30f6ab300f82255e4c209c62edf291df16cb7030" Feb 01 14:39:02 crc kubenswrapper[4820]: I0201 14:39:02.055183 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0b48-account-create-update-l599b" Feb 01 14:39:02 crc kubenswrapper[4820]: I0201 14:39:02.059948 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ftt9j" event={"ID":"86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e","Type":"ContainerDied","Data":"bccf5dd6d0398c31c4b324ce162101ea40f03b721ef46b933726f6c4ebef42ea"} Feb 01 14:39:02 crc kubenswrapper[4820]: I0201 14:39:02.059997 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bccf5dd6d0398c31c4b324ce162101ea40f03b721ef46b933726f6c4ebef42ea" Feb 01 14:39:02 crc kubenswrapper[4820]: I0201 14:39:02.060053 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-ftt9j" Feb 01 14:39:02 crc kubenswrapper[4820]: I0201 14:39:02.356115 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:39:03 crc kubenswrapper[4820]: E0201 14:39:03.963310 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8255e6c_59ea_4449_bcec_264a12bf6d6e.slice/crio-76936d492ceac0fbfeeede8ea87447c86774da965e25e10dd874d13d7252157c\": RecentStats: unable to find data in memory cache]" Feb 01 14:39:05 crc kubenswrapper[4820]: I0201 14:39:05.083446 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d3e8287-0a4a-44fe-8b72-41276a73ad66","Type":"ContainerStarted","Data":"ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5"} Feb 01 14:39:07 crc kubenswrapper[4820]: I0201 14:39:07.907997 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qqtrm"] Feb 01 14:39:07 crc kubenswrapper[4820]: E0201 14:39:07.909053 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e" containerName="mariadb-database-create" Feb 01 14:39:07 crc kubenswrapper[4820]: I0201 14:39:07.909071 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e" containerName="mariadb-database-create" Feb 01 14:39:07 crc kubenswrapper[4820]: E0201 14:39:07.909092 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9d299ca-c553-421d-bbd8-7aaebe472a6d" containerName="mariadb-database-create" Feb 01 14:39:07 crc kubenswrapper[4820]: I0201 14:39:07.909100 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9d299ca-c553-421d-bbd8-7aaebe472a6d" containerName="mariadb-database-create" Feb 01 14:39:07 crc kubenswrapper[4820]: E0201 14:39:07.909114 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75db30c4-7da9-44d6-ad1a-02b67d4e8a85" containerName="mariadb-account-create-update" Feb 01 14:39:07 crc kubenswrapper[4820]: I0201 14:39:07.909123 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="75db30c4-7da9-44d6-ad1a-02b67d4e8a85" containerName="mariadb-account-create-update" Feb 01 14:39:07 crc kubenswrapper[4820]: E0201 14:39:07.909143 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f099d076-3429-46a2-8592-97326229034c" containerName="mariadb-account-create-update" Feb 01 14:39:07 crc kubenswrapper[4820]: I0201 14:39:07.909151 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f099d076-3429-46a2-8592-97326229034c" containerName="mariadb-account-create-update" Feb 01 14:39:07 crc kubenswrapper[4820]: E0201 14:39:07.909170 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2" containerName="mariadb-database-create" Feb 01 14:39:07 crc kubenswrapper[4820]: I0201 14:39:07.909178 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2" containerName="mariadb-database-create" Feb 01 14:39:07 crc kubenswrapper[4820]: E0201 14:39:07.909193 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="756d7bda-e919-4ca5-8549-80f31cc37ac7" containerName="mariadb-account-create-update" Feb 01 14:39:07 crc kubenswrapper[4820]: I0201 14:39:07.909200 4820 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="756d7bda-e919-4ca5-8549-80f31cc37ac7" containerName="mariadb-account-create-update" Feb 01 14:39:07 crc kubenswrapper[4820]: I0201 14:39:07.909400 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e" containerName="mariadb-database-create" Feb 01 14:39:07 crc kubenswrapper[4820]: I0201 14:39:07.909415 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2" containerName="mariadb-database-create" Feb 01 14:39:07 crc kubenswrapper[4820]: I0201 14:39:07.909559 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9d299ca-c553-421d-bbd8-7aaebe472a6d" containerName="mariadb-database-create" Feb 01 14:39:07 crc kubenswrapper[4820]: I0201 14:39:07.909575 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="756d7bda-e919-4ca5-8549-80f31cc37ac7" containerName="mariadb-account-create-update" Feb 01 14:39:07 crc kubenswrapper[4820]: I0201 14:39:07.909592 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f099d076-3429-46a2-8592-97326229034c" containerName="mariadb-account-create-update" Feb 01 14:39:07 crc kubenswrapper[4820]: I0201 14:39:07.909605 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="75db30c4-7da9-44d6-ad1a-02b67d4e8a85" containerName="mariadb-account-create-update" Feb 01 14:39:07 crc kubenswrapper[4820]: I0201 14:39:07.910268 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qqtrm" Feb 01 14:39:07 crc kubenswrapper[4820]: I0201 14:39:07.916727 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8fs5g" Feb 01 14:39:07 crc kubenswrapper[4820]: I0201 14:39:07.916950 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 01 14:39:07 crc kubenswrapper[4820]: I0201 14:39:07.919167 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 01 14:39:07 crc kubenswrapper[4820]: I0201 14:39:07.927862 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qqtrm"] Feb 01 14:39:07 crc kubenswrapper[4820]: I0201 14:39:07.976192 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ebbdff-635e-4998-b84f-04dbe869ab4e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qqtrm\" (UID: \"56ebbdff-635e-4998-b84f-04dbe869ab4e\") " pod="openstack/nova-cell0-conductor-db-sync-qqtrm" Feb 01 14:39:07 crc kubenswrapper[4820]: I0201 14:39:07.976243 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb4wh\" (UniqueName: \"kubernetes.io/projected/56ebbdff-635e-4998-b84f-04dbe869ab4e-kube-api-access-nb4wh\") pod \"nova-cell0-conductor-db-sync-qqtrm\" (UID: \"56ebbdff-635e-4998-b84f-04dbe869ab4e\") " pod="openstack/nova-cell0-conductor-db-sync-qqtrm" Feb 01 14:39:07 crc kubenswrapper[4820]: I0201 14:39:07.976283 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56ebbdff-635e-4998-b84f-04dbe869ab4e-scripts\") pod \"nova-cell0-conductor-db-sync-qqtrm\" (UID: \"56ebbdff-635e-4998-b84f-04dbe869ab4e\") " pod="openstack/nova-cell0-conductor-db-sync-qqtrm" Feb 01 14:39:07 crc 
kubenswrapper[4820]: I0201 14:39:07.976365 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ebbdff-635e-4998-b84f-04dbe869ab4e-config-data\") pod \"nova-cell0-conductor-db-sync-qqtrm\" (UID: \"56ebbdff-635e-4998-b84f-04dbe869ab4e\") " pod="openstack/nova-cell0-conductor-db-sync-qqtrm" Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.077735 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ebbdff-635e-4998-b84f-04dbe869ab4e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qqtrm\" (UID: \"56ebbdff-635e-4998-b84f-04dbe869ab4e\") " pod="openstack/nova-cell0-conductor-db-sync-qqtrm" Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.077819 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb4wh\" (UniqueName: \"kubernetes.io/projected/56ebbdff-635e-4998-b84f-04dbe869ab4e-kube-api-access-nb4wh\") pod \"nova-cell0-conductor-db-sync-qqtrm\" (UID: \"56ebbdff-635e-4998-b84f-04dbe869ab4e\") " pod="openstack/nova-cell0-conductor-db-sync-qqtrm" Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.077905 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56ebbdff-635e-4998-b84f-04dbe869ab4e-scripts\") pod \"nova-cell0-conductor-db-sync-qqtrm\" (UID: \"56ebbdff-635e-4998-b84f-04dbe869ab4e\") " pod="openstack/nova-cell0-conductor-db-sync-qqtrm" Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.078006 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ebbdff-635e-4998-b84f-04dbe869ab4e-config-data\") pod \"nova-cell0-conductor-db-sync-qqtrm\" (UID: \"56ebbdff-635e-4998-b84f-04dbe869ab4e\") " pod="openstack/nova-cell0-conductor-db-sync-qqtrm" Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.085868 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56ebbdff-635e-4998-b84f-04dbe869ab4e-scripts\") pod \"nova-cell0-conductor-db-sync-qqtrm\" (UID: \"56ebbdff-635e-4998-b84f-04dbe869ab4e\") " pod="openstack/nova-cell0-conductor-db-sync-qqtrm" Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.086032 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ebbdff-635e-4998-b84f-04dbe869ab4e-config-data\") pod \"nova-cell0-conductor-db-sync-qqtrm\" (UID: \"56ebbdff-635e-4998-b84f-04dbe869ab4e\") " pod="openstack/nova-cell0-conductor-db-sync-qqtrm" Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.086346 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ebbdff-635e-4998-b84f-04dbe869ab4e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qqtrm\" (UID: \"56ebbdff-635e-4998-b84f-04dbe869ab4e\") " pod="openstack/nova-cell0-conductor-db-sync-qqtrm" Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.095670 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb4wh\" (UniqueName: \"kubernetes.io/projected/56ebbdff-635e-4998-b84f-04dbe869ab4e-kube-api-access-nb4wh\") pod \"nova-cell0-conductor-db-sync-qqtrm\" (UID: \"56ebbdff-635e-4998-b84f-04dbe869ab4e\") " pod="openstack/nova-cell0-conductor-db-sync-qqtrm" 
Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.116567 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d3e8287-0a4a-44fe-8b72-41276a73ad66","Type":"ContainerStarted","Data":"479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94"} Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.116748 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3d3e8287-0a4a-44fe-8b72-41276a73ad66" containerName="ceilometer-central-agent" containerID="cri-o://0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4" gracePeriod=30 Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.116793 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3d3e8287-0a4a-44fe-8b72-41276a73ad66" containerName="proxy-httpd" containerID="cri-o://479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94" gracePeriod=30 Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.116814 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.116822 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3d3e8287-0a4a-44fe-8b72-41276a73ad66" containerName="sg-core" containerID="cri-o://ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5" gracePeriod=30 Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.116857 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3d3e8287-0a4a-44fe-8b72-41276a73ad66" containerName="ceilometer-notification-agent" containerID="cri-o://609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f" gracePeriod=30 Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.271557 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qqtrm" Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.722471 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.889712749 podStartE2EDuration="9.722450921s" podCreationTimestamp="2026-02-01 14:38:59 +0000 UTC" firstStartedPulling="2026-02-01 14:39:00.222672749 +0000 UTC m=+1081.743039033" lastFinishedPulling="2026-02-01 14:39:07.055410921 +0000 UTC m=+1088.575777205" observedRunningTime="2026-02-01 14:39:08.151574587 +0000 UTC m=+1089.671940871" watchObservedRunningTime="2026-02-01 14:39:08.722450921 +0000 UTC m=+1090.242817205" Feb 01 14:39:08 crc kubenswrapper[4820]: W0201 14:39:08.724119 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56ebbdff_635e_4998_b84f_04dbe869ab4e.slice/crio-e5dc69bb31b6121dbf395cdd71b655606add04e3da11fd1fd83d40437f9be4cb WatchSource:0}: Error finding container e5dc69bb31b6121dbf395cdd71b655606add04e3da11fd1fd83d40437f9be4cb: Status 404 returned error can't find the container with id e5dc69bb31b6121dbf395cdd71b655606add04e3da11fd1fd83d40437f9be4cb Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.725194 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qqtrm"] Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.793447 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.891046 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9jd8\" (UniqueName: \"kubernetes.io/projected/3d3e8287-0a4a-44fe-8b72-41276a73ad66-kube-api-access-b9jd8\") pod \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.891124 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3d3e8287-0a4a-44fe-8b72-41276a73ad66-sg-core-conf-yaml\") pod \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.891166 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3e8287-0a4a-44fe-8b72-41276a73ad66-combined-ca-bundle\") pod \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.891183 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d3e8287-0a4a-44fe-8b72-41276a73ad66-scripts\") pod \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.891234 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3e8287-0a4a-44fe-8b72-41276a73ad66-config-data\") pod \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.891279 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d3e8287-0a4a-44fe-8b72-41276a73ad66-log-httpd\") pod \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.891325 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d3e8287-0a4a-44fe-8b72-41276a73ad66-run-httpd\") pod \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\" (UID: \"3d3e8287-0a4a-44fe-8b72-41276a73ad66\") " Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.891947 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d3e8287-0a4a-44fe-8b72-41276a73ad66-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3d3e8287-0a4a-44fe-8b72-41276a73ad66" (UID: "3d3e8287-0a4a-44fe-8b72-41276a73ad66"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.891936 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d3e8287-0a4a-44fe-8b72-41276a73ad66-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3d3e8287-0a4a-44fe-8b72-41276a73ad66" (UID: "3d3e8287-0a4a-44fe-8b72-41276a73ad66"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.896097 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d3e8287-0a4a-44fe-8b72-41276a73ad66-kube-api-access-b9jd8" (OuterVolumeSpecName: "kube-api-access-b9jd8") pod "3d3e8287-0a4a-44fe-8b72-41276a73ad66" (UID: "3d3e8287-0a4a-44fe-8b72-41276a73ad66"). InnerVolumeSpecName "kube-api-access-b9jd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.896312 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d3e8287-0a4a-44fe-8b72-41276a73ad66-scripts" (OuterVolumeSpecName: "scripts") pod "3d3e8287-0a4a-44fe-8b72-41276a73ad66" (UID: "3d3e8287-0a4a-44fe-8b72-41276a73ad66"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.916471 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d3e8287-0a4a-44fe-8b72-41276a73ad66-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3d3e8287-0a4a-44fe-8b72-41276a73ad66" (UID: "3d3e8287-0a4a-44fe-8b72-41276a73ad66"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.957761 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d3e8287-0a4a-44fe-8b72-41276a73ad66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d3e8287-0a4a-44fe-8b72-41276a73ad66" (UID: "3d3e8287-0a4a-44fe-8b72-41276a73ad66"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.970208 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d3e8287-0a4a-44fe-8b72-41276a73ad66-config-data" (OuterVolumeSpecName: "config-data") pod "3d3e8287-0a4a-44fe-8b72-41276a73ad66" (UID: "3d3e8287-0a4a-44fe-8b72-41276a73ad66"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.994098 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3d3e8287-0a4a-44fe-8b72-41276a73ad66-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.994135 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3e8287-0a4a-44fe-8b72-41276a73ad66-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.994150 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d3e8287-0a4a-44fe-8b72-41276a73ad66-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.994164 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3e8287-0a4a-44fe-8b72-41276a73ad66-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.994176 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d3e8287-0a4a-44fe-8b72-41276a73ad66-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.994185 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d3e8287-0a4a-44fe-8b72-41276a73ad66-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:08 crc kubenswrapper[4820]: I0201 14:39:08.994194 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9jd8\" (UniqueName: \"kubernetes.io/projected/3d3e8287-0a4a-44fe-8b72-41276a73ad66-kube-api-access-b9jd8\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.151458 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qqtrm" event={"ID":"56ebbdff-635e-4998-b84f-04dbe869ab4e","Type":"ContainerStarted","Data":"e5dc69bb31b6121dbf395cdd71b655606add04e3da11fd1fd83d40437f9be4cb"} Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.162346 4820 generic.go:334] "Generic (PLEG): container finished" podID="3d3e8287-0a4a-44fe-8b72-41276a73ad66" containerID="479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94" exitCode=0 Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.162388 4820 generic.go:334] "Generic (PLEG): container finished" podID="3d3e8287-0a4a-44fe-8b72-41276a73ad66" containerID="ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5" exitCode=2 Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.162397 4820 generic.go:334] "Generic (PLEG): container finished" podID="3d3e8287-0a4a-44fe-8b72-41276a73ad66" containerID="609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f" exitCode=0 Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.162405 4820 generic.go:334] "Generic (PLEG): container finished" podID="3d3e8287-0a4a-44fe-8b72-41276a73ad66" containerID="0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4" exitCode=0 Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.162399 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d3e8287-0a4a-44fe-8b72-41276a73ad66","Type":"ContainerDied","Data":"479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94"} Feb 01 
14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.162468 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d3e8287-0a4a-44fe-8b72-41276a73ad66","Type":"ContainerDied","Data":"ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5"} Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.162483 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d3e8287-0a4a-44fe-8b72-41276a73ad66","Type":"ContainerDied","Data":"609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f"} Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.162505 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d3e8287-0a4a-44fe-8b72-41276a73ad66","Type":"ContainerDied","Data":"0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4"} Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.162518 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d3e8287-0a4a-44fe-8b72-41276a73ad66","Type":"ContainerDied","Data":"84b69b80472d83639b38577b729d8fd0c55967744e10695d0371ed9ea6e8ec2b"} Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.162536 4820 scope.go:117] "RemoveContainer" containerID="479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.162919 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.186429 4820 scope.go:117] "RemoveContainer" containerID="ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.210715 4820 scope.go:117] "RemoveContainer" containerID="609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.211302 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.232284 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.243393 4820 scope.go:117] "RemoveContainer" containerID="0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.251324 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:39:09 crc kubenswrapper[4820]: E0201 14:39:09.251754 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3e8287-0a4a-44fe-8b72-41276a73ad66" containerName="proxy-httpd" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.251778 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3e8287-0a4a-44fe-8b72-41276a73ad66" containerName="proxy-httpd" Feb 01 14:39:09 crc kubenswrapper[4820]: E0201 14:39:09.251791 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3e8287-0a4a-44fe-8b72-41276a73ad66" containerName="ceilometer-central-agent" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.251797 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3e8287-0a4a-44fe-8b72-41276a73ad66" containerName="ceilometer-central-agent" Feb 01 14:39:09 crc kubenswrapper[4820]: E0201 14:39:09.251814 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3e8287-0a4a-44fe-8b72-41276a73ad66" containerName="ceilometer-notification-agent" Feb 01 14:39:09 crc 
kubenswrapper[4820]: I0201 14:39:09.251820 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3e8287-0a4a-44fe-8b72-41276a73ad66" containerName="ceilometer-notification-agent" Feb 01 14:39:09 crc kubenswrapper[4820]: E0201 14:39:09.251830 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3e8287-0a4a-44fe-8b72-41276a73ad66" containerName="sg-core" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.251835 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3e8287-0a4a-44fe-8b72-41276a73ad66" containerName="sg-core" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.252342 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3e8287-0a4a-44fe-8b72-41276a73ad66" containerName="sg-core" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.252379 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3e8287-0a4a-44fe-8b72-41276a73ad66" containerName="ceilometer-notification-agent" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.252394 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3e8287-0a4a-44fe-8b72-41276a73ad66" containerName="proxy-httpd" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.252405 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3e8287-0a4a-44fe-8b72-41276a73ad66" containerName="ceilometer-central-agent" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.254594 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.258226 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.258557 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.280631 4820 scope.go:117] "RemoveContainer" containerID="479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94" Feb 01 14:39:09 crc kubenswrapper[4820]: E0201 14:39:09.281208 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94\": container with ID starting with 479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94 not found: ID does not exist" containerID="479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.281259 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94"} err="failed to get container status \"479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94\": rpc error: code = NotFound desc = could not find container \"479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94\": container with ID starting with 479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94 not found: ID does not exist" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.281287 4820 scope.go:117] "RemoveContainer" containerID="ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5" Feb 01 14:39:09 crc kubenswrapper[4820]: E0201 14:39:09.281534 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5\": container with ID starting with ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5 not found: ID does not exist" containerID="ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.281587 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5"} err="failed to get container status \"ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5\": rpc error: code = NotFound desc = could not find container \"ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5\": container with ID starting with ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5 not found: ID does not exist" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.281614 4820 scope.go:117] "RemoveContainer" containerID="609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f" Feb 01 14:39:09 crc kubenswrapper[4820]: E0201 14:39:09.282167 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f\": container with ID starting with 609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f not found: ID does not exist" containerID="609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.282224 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f"} err="failed to get container status \"609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f\": rpc error: code = NotFound desc = could not find container \"609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f\": container with ID starting with 609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f not found: ID does not exist" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.282249 4820 scope.go:117] "RemoveContainer" containerID="0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4" Feb 01 14:39:09 crc kubenswrapper[4820]: E0201 14:39:09.282542 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4\": container with ID starting with 0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4 not found: ID does not exist" containerID="0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.282563 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4"} err="failed to get container status \"0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4\": rpc error: code = NotFound desc = could not find container \"0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4\": container with ID starting with 0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4 not found: ID does not exist" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.282576 4820 scope.go:117] "RemoveContainer" containerID="479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94" Feb 01 14:39:09 crc 
kubenswrapper[4820]: I0201 14:39:09.282969 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94"} err="failed to get container status \"479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94\": rpc error: code = NotFound desc = could not find container \"479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94\": container with ID starting with 479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94 not found: ID does not exist" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.282996 4820 scope.go:117] "RemoveContainer" containerID="ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.283688 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5"} err="failed to get container status \"ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5\": rpc error: code = NotFound desc = could not find container \"ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5\": container with ID starting with ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5 not found: ID does not exist" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.283736 4820 scope.go:117] "RemoveContainer" containerID="609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.285289 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f"} err="failed to get container status \"609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f\": rpc error: code = NotFound desc = could not find container \"609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f\": container with ID starting with 609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f not found: ID does not exist" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.285730 4820 scope.go:117] "RemoveContainer" containerID="0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.286024 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4"} err="failed to get container status \"0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4\": rpc error: code = NotFound desc = could not find container \"0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4\": container with ID starting with 0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4 not found: ID does not exist" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.286044 4820 scope.go:117] "RemoveContainer" containerID="479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.286355 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94"} err="failed to get container status \"479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94\": rpc error: code = NotFound desc = could not find container \"479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94\": container with ID 
starting with 479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94 not found: ID does not exist" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.286422 4820 scope.go:117] "RemoveContainer" containerID="ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.286908 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5"} err="failed to get container status \"ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5\": rpc error: code = NotFound desc = could not find container \"ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5\": container with ID starting with ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5 not found: ID does not exist" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.286933 4820 scope.go:117] "RemoveContainer" containerID="609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.287164 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f"} err="failed to get container status \"609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f\": rpc error: code = NotFound desc = could not find container \"609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f\": container with ID starting with 609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f not found: ID does not exist" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.287183 4820 scope.go:117] "RemoveContainer" containerID="0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.287424 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4"} err="failed to get container status \"0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4\": rpc error: code = NotFound desc = could not find container \"0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4\": container with ID starting with 0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4 not found: ID does not exist" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.287463 4820 scope.go:117] "RemoveContainer" containerID="479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.287757 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94"} err="failed to get container status \"479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94\": rpc error: code = NotFound desc = could not find container \"479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94\": container with ID starting with 479d4f5fe16f25e19ac529ba3c656e715da043d142a5bcaf0a2be1a6d4525e94 not found: ID does not exist" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.287777 4820 scope.go:117] "RemoveContainer" containerID="ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.288023 4820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5"} err="failed to get container status \"ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5\": rpc error: code = NotFound desc = could not find container \"ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5\": container with ID starting with ad123039eecb76f606459e76d3670b6adb5df0d54dbfd56c18f4f3c56434dcf5 not found: ID does not exist" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.288060 4820 scope.go:117] "RemoveContainer" containerID="609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.288312 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f"} err="failed to get container status \"609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f\": rpc error: code = NotFound desc = could not find container \"609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f\": container with ID starting with 609fc1eed094e5043079c2373dbd9ff2970318edcce67f34378db4f2e5d50c0f not found: ID does not exist" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.288333 4820 scope.go:117] "RemoveContainer" containerID="0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.288580 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4"} err="failed to get container status \"0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4\": rpc error: code = NotFound desc = could not find container \"0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4\": container with ID starting with 0c160caa07f630859b495f7bc3cfc57923e202fb6b5e673ef351d04a64985ef4 not found: ID does not exist" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.292198 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.444926 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd47f21b-7802-4484-95f4-c7254be818eb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " pod="openstack/ceilometer-0" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.444989 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd47f21b-7802-4484-95f4-c7254be818eb-config-data\") pod \"ceilometer-0\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " pod="openstack/ceilometer-0" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.445050 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd47f21b-7802-4484-95f4-c7254be818eb-scripts\") pod \"ceilometer-0\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " pod="openstack/ceilometer-0" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.445089 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd47f21b-7802-4484-95f4-c7254be818eb-log-httpd\") pod \"ceilometer-0\" (UID: 
\"dd47f21b-7802-4484-95f4-c7254be818eb\") " pod="openstack/ceilometer-0" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.445153 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl7kb\" (UniqueName: \"kubernetes.io/projected/dd47f21b-7802-4484-95f4-c7254be818eb-kube-api-access-rl7kb\") pod \"ceilometer-0\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " pod="openstack/ceilometer-0" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.445374 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd47f21b-7802-4484-95f4-c7254be818eb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " pod="openstack/ceilometer-0" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.445486 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd47f21b-7802-4484-95f4-c7254be818eb-run-httpd\") pod \"ceilometer-0\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " pod="openstack/ceilometer-0" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.547132 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd47f21b-7802-4484-95f4-c7254be818eb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " pod="openstack/ceilometer-0" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.547195 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd47f21b-7802-4484-95f4-c7254be818eb-run-httpd\") pod \"ceilometer-0\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " pod="openstack/ceilometer-0" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.547271 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd47f21b-7802-4484-95f4-c7254be818eb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " pod="openstack/ceilometer-0" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.547299 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd47f21b-7802-4484-95f4-c7254be818eb-config-data\") pod \"ceilometer-0\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " pod="openstack/ceilometer-0" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.547339 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd47f21b-7802-4484-95f4-c7254be818eb-scripts\") pod \"ceilometer-0\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " pod="openstack/ceilometer-0" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.547384 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd47f21b-7802-4484-95f4-c7254be818eb-log-httpd\") pod \"ceilometer-0\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " pod="openstack/ceilometer-0" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.547415 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl7kb\" (UniqueName: 
\"kubernetes.io/projected/dd47f21b-7802-4484-95f4-c7254be818eb-kube-api-access-rl7kb\") pod \"ceilometer-0\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " pod="openstack/ceilometer-0" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.548679 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd47f21b-7802-4484-95f4-c7254be818eb-run-httpd\") pod \"ceilometer-0\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " pod="openstack/ceilometer-0" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.549339 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd47f21b-7802-4484-95f4-c7254be818eb-log-httpd\") pod \"ceilometer-0\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " pod="openstack/ceilometer-0" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.554642 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd47f21b-7802-4484-95f4-c7254be818eb-scripts\") pod \"ceilometer-0\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " pod="openstack/ceilometer-0" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.556439 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd47f21b-7802-4484-95f4-c7254be818eb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " pod="openstack/ceilometer-0" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.556547 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd47f21b-7802-4484-95f4-c7254be818eb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " pod="openstack/ceilometer-0" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.557381 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd47f21b-7802-4484-95f4-c7254be818eb-config-data\") pod \"ceilometer-0\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " pod="openstack/ceilometer-0" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.571828 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl7kb\" (UniqueName: \"kubernetes.io/projected/dd47f21b-7802-4484-95f4-c7254be818eb-kube-api-access-rl7kb\") pod \"ceilometer-0\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " pod="openstack/ceilometer-0" Feb 01 14:39:09 crc kubenswrapper[4820]: I0201 14:39:09.870449 4820 util.go:30] "No sandbox for pod can be found. 
Feb 01 14:39:10 crc kubenswrapper[4820]: I0201 14:39:10.315472 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:39:11 crc kubenswrapper[4820]: I0201 14:39:11.029358 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:39:11 crc kubenswrapper[4820]: I0201 14:39:11.185223 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd47f21b-7802-4484-95f4-c7254be818eb","Type":"ContainerStarted","Data":"3f11707e9682c69ee48fab869e880a5e17a5a8c9bb65ff17ce44e5e8e3d58940"} Feb 01 14:39:11 crc kubenswrapper[4820]: I0201 14:39:11.216006 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d3e8287-0a4a-44fe-8b72-41276a73ad66" path="/var/lib/kubelet/pods/3d3e8287-0a4a-44fe-8b72-41276a73ad66/volumes" Feb 01 14:39:12 crc kubenswrapper[4820]: I0201 14:39:12.193785 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd47f21b-7802-4484-95f4-c7254be818eb","Type":"ContainerStarted","Data":"87d7f5057ce0aeeb706b500162b1cd4e81c04acfbd29ccb29affa6779e3debd5"} Feb 01 14:39:14 crc kubenswrapper[4820]: E0201 14:39:14.198068 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8255e6c_59ea_4449_bcec_264a12bf6d6e.slice/crio-76936d492ceac0fbfeeede8ea87447c86774da965e25e10dd874d13d7252157c\": RecentStats: unable to find data in memory cache]" Feb 01 14:39:16 crc kubenswrapper[4820]: I0201 14:39:16.230840 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd47f21b-7802-4484-95f4-c7254be818eb","Type":"ContainerStarted","Data":"5274c263491d6cfdf5966d5978580d5c1f9491389731753746f387b20ba9ecbf"} Feb 01 14:39:16 crc kubenswrapper[4820]: I0201 14:39:16.233773 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qqtrm" event={"ID":"56ebbdff-635e-4998-b84f-04dbe869ab4e","Type":"ContainerStarted","Data":"93c87a737dbd9d33db0867aca8f0653957e885cdff0417cf6174889b4daaaf03"} Feb 01 14:39:16 crc kubenswrapper[4820]: I0201 14:39:16.255351 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-qqtrm" podStartSLOduration=2.185530719 podStartE2EDuration="9.255335227s" podCreationTimestamp="2026-02-01 14:39:07 +0000 UTC" firstStartedPulling="2026-02-01 14:39:08.726446139 +0000 UTC m=+1090.246812423" lastFinishedPulling="2026-02-01 14:39:15.796250647 +0000 UTC m=+1097.316616931" observedRunningTime="2026-02-01 14:39:16.25135763 +0000 UTC m=+1097.771723934" watchObservedRunningTime="2026-02-01 14:39:16.255335227 +0000 UTC m=+1097.775701501" Feb 01 14:39:17 crc kubenswrapper[4820]: I0201 14:39:17.243290 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd47f21b-7802-4484-95f4-c7254be818eb","Type":"ContainerStarted","Data":"bbf38f28c05cd0aed0bc3e09b7ce44f1632acba61681e4fa54ac99b127603b62"} Feb 01 14:39:19 crc kubenswrapper[4820]: I0201 14:39:19.242347 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:39:19 crc kubenswrapper[4820]:
I0201 14:39:19.242975 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:39:19 crc kubenswrapper[4820]: I0201 14:39:19.260466 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd47f21b-7802-4484-95f4-c7254be818eb","Type":"ContainerStarted","Data":"dfda6fddf537423c0909fbc4548380acf7c73c0ff08d0ca3e73f0b8a45068928"} Feb 01 14:39:19 crc kubenswrapper[4820]: I0201 14:39:19.260673 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 01 14:39:19 crc kubenswrapper[4820]: I0201 14:39:19.260663 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd47f21b-7802-4484-95f4-c7254be818eb" containerName="ceilometer-central-agent" containerID="cri-o://87d7f5057ce0aeeb706b500162b1cd4e81c04acfbd29ccb29affa6779e3debd5" gracePeriod=30 Feb 01 14:39:19 crc kubenswrapper[4820]: I0201 14:39:19.260774 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd47f21b-7802-4484-95f4-c7254be818eb" containerName="proxy-httpd" containerID="cri-o://dfda6fddf537423c0909fbc4548380acf7c73c0ff08d0ca3e73f0b8a45068928" gracePeriod=30 Feb 01 14:39:19 crc kubenswrapper[4820]: I0201 14:39:19.260819 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd47f21b-7802-4484-95f4-c7254be818eb" containerName="sg-core" containerID="cri-o://bbf38f28c05cd0aed0bc3e09b7ce44f1632acba61681e4fa54ac99b127603b62" gracePeriod=30 Feb 01 14:39:19 crc kubenswrapper[4820]: I0201 14:39:19.260857 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd47f21b-7802-4484-95f4-c7254be818eb" containerName="ceilometer-notification-agent" containerID="cri-o://5274c263491d6cfdf5966d5978580d5c1f9491389731753746f387b20ba9ecbf" gracePeriod=30 Feb 01 14:39:19 crc kubenswrapper[4820]: I0201 14:39:19.290634 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.61121933 podStartE2EDuration="10.290616668s" podCreationTimestamp="2026-02-01 14:39:09 +0000 UTC" firstStartedPulling="2026-02-01 14:39:10.325173064 +0000 UTC m=+1091.845539348" lastFinishedPulling="2026-02-01 14:39:19.004570392 +0000 UTC m=+1100.524936686" observedRunningTime="2026-02-01 14:39:19.280213255 +0000 UTC m=+1100.800579529" watchObservedRunningTime="2026-02-01 14:39:19.290616668 +0000 UTC m=+1100.810982952" Feb 01 14:39:20 crc kubenswrapper[4820]: I0201 14:39:20.270447 4820 generic.go:334] "Generic (PLEG): container finished" podID="dd47f21b-7802-4484-95f4-c7254be818eb" containerID="bbf38f28c05cd0aed0bc3e09b7ce44f1632acba61681e4fa54ac99b127603b62" exitCode=2 Feb 01 14:39:20 crc kubenswrapper[4820]: I0201 14:39:20.270866 4820 generic.go:334] "Generic (PLEG): container finished" podID="dd47f21b-7802-4484-95f4-c7254be818eb" containerID="5274c263491d6cfdf5966d5978580d5c1f9491389731753746f387b20ba9ecbf" exitCode=0 Feb 01 14:39:20 crc kubenswrapper[4820]: I0201 14:39:20.270913 4820 generic.go:334] "Generic (PLEG): container finished" podID="dd47f21b-7802-4484-95f4-c7254be818eb" 
containerID="87d7f5057ce0aeeb706b500162b1cd4e81c04acfbd29ccb29affa6779e3debd5" exitCode=0 Feb 01 14:39:20 crc kubenswrapper[4820]: I0201 14:39:20.270628 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd47f21b-7802-4484-95f4-c7254be818eb","Type":"ContainerDied","Data":"bbf38f28c05cd0aed0bc3e09b7ce44f1632acba61681e4fa54ac99b127603b62"} Feb 01 14:39:20 crc kubenswrapper[4820]: I0201 14:39:20.270956 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd47f21b-7802-4484-95f4-c7254be818eb","Type":"ContainerDied","Data":"5274c263491d6cfdf5966d5978580d5c1f9491389731753746f387b20ba9ecbf"} Feb 01 14:39:20 crc kubenswrapper[4820]: I0201 14:39:20.270976 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd47f21b-7802-4484-95f4-c7254be818eb","Type":"ContainerDied","Data":"87d7f5057ce0aeeb706b500162b1cd4e81c04acfbd29ccb29affa6779e3debd5"} Feb 01 14:39:24 crc kubenswrapper[4820]: E0201 14:39:24.414541 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8255e6c_59ea_4449_bcec_264a12bf6d6e.slice/crio-76936d492ceac0fbfeeede8ea87447c86774da965e25e10dd874d13d7252157c\": RecentStats: unable to find data in memory cache]" Feb 01 14:39:26 crc kubenswrapper[4820]: I0201 14:39:26.327672 4820 generic.go:334] "Generic (PLEG): container finished" podID="56ebbdff-635e-4998-b84f-04dbe869ab4e" containerID="93c87a737dbd9d33db0867aca8f0653957e885cdff0417cf6174889b4daaaf03" exitCode=0 Feb 01 14:39:26 crc kubenswrapper[4820]: I0201 14:39:26.327738 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qqtrm" event={"ID":"56ebbdff-635e-4998-b84f-04dbe869ab4e","Type":"ContainerDied","Data":"93c87a737dbd9d33db0867aca8f0653957e885cdff0417cf6174889b4daaaf03"} Feb 01 14:39:27 crc kubenswrapper[4820]: I0201 14:39:27.715231 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qqtrm" Feb 01 14:39:27 crc kubenswrapper[4820]: I0201 14:39:27.839757 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ebbdff-635e-4998-b84f-04dbe869ab4e-combined-ca-bundle\") pod \"56ebbdff-635e-4998-b84f-04dbe869ab4e\" (UID: \"56ebbdff-635e-4998-b84f-04dbe869ab4e\") " Feb 01 14:39:27 crc kubenswrapper[4820]: I0201 14:39:27.839983 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb4wh\" (UniqueName: \"kubernetes.io/projected/56ebbdff-635e-4998-b84f-04dbe869ab4e-kube-api-access-nb4wh\") pod \"56ebbdff-635e-4998-b84f-04dbe869ab4e\" (UID: \"56ebbdff-635e-4998-b84f-04dbe869ab4e\") " Feb 01 14:39:27 crc kubenswrapper[4820]: I0201 14:39:27.840052 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ebbdff-635e-4998-b84f-04dbe869ab4e-config-data\") pod \"56ebbdff-635e-4998-b84f-04dbe869ab4e\" (UID: \"56ebbdff-635e-4998-b84f-04dbe869ab4e\") " Feb 01 14:39:27 crc kubenswrapper[4820]: I0201 14:39:27.840139 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56ebbdff-635e-4998-b84f-04dbe869ab4e-scripts\") pod \"56ebbdff-635e-4998-b84f-04dbe869ab4e\" (UID: \"56ebbdff-635e-4998-b84f-04dbe869ab4e\") " Feb 01 14:39:27 crc kubenswrapper[4820]: I0201 14:39:27.847251 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56ebbdff-635e-4998-b84f-04dbe869ab4e-scripts" (OuterVolumeSpecName: "scripts") pod "56ebbdff-635e-4998-b84f-04dbe869ab4e" (UID: "56ebbdff-635e-4998-b84f-04dbe869ab4e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:27 crc kubenswrapper[4820]: I0201 14:39:27.847532 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56ebbdff-635e-4998-b84f-04dbe869ab4e-kube-api-access-nb4wh" (OuterVolumeSpecName: "kube-api-access-nb4wh") pod "56ebbdff-635e-4998-b84f-04dbe869ab4e" (UID: "56ebbdff-635e-4998-b84f-04dbe869ab4e"). InnerVolumeSpecName "kube-api-access-nb4wh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:39:27 crc kubenswrapper[4820]: I0201 14:39:27.864527 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56ebbdff-635e-4998-b84f-04dbe869ab4e-config-data" (OuterVolumeSpecName: "config-data") pod "56ebbdff-635e-4998-b84f-04dbe869ab4e" (UID: "56ebbdff-635e-4998-b84f-04dbe869ab4e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:27 crc kubenswrapper[4820]: I0201 14:39:27.873190 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56ebbdff-635e-4998-b84f-04dbe869ab4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56ebbdff-635e-4998-b84f-04dbe869ab4e" (UID: "56ebbdff-635e-4998-b84f-04dbe869ab4e"). InnerVolumeSpecName "combined-ca-bundle". 
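PluginName "kubernetes.io/secret", VolumeGidValue ""

Teardown mirrors the mount path: once the nova-cell0-conductor-db-sync-qqtrm container exits, reconciler_common logs UnmountVolume started for each volume, operation_generator confirms UnmountVolume.TearDown succeeded, and the reconciler finally reports each volume detached with an empty DevicePath. A small cross-check sketch, under the same illustrative journal-export assumption as above, that flags any volume whose unmount began but never reached the detached report:

import re

# Same escaped-quote journal form as above; UniqueName is unique per pod+volume.
UNMOUNT = re.compile(r'UnmountVolume started for volume .*?UniqueName: \\"([^\\"]+)\\"')
DETACHED = re.compile(r'Volume detached for volume .*?UniqueName: \\"([^\\"]+)\\"')

begun, detached = set(), set()
for line in open("kubelet.log"):          # stand-in name for a journal export
    if m := UNMOUNT.search(line):
        begun.add(m.group(1))
    elif m := DETACHED.search(line):
        detached.add(m.group(1))

for name in sorted(begun - detached):
    print("teardown started but never reported detached:", name)

For the four volumes of pod 56ebbdff-635e-4998-b84f-04dbe869ab4e above, every UnmountVolume at 14:39:27.839-.840 is matched by a Volume detached report at 14:39:27.943-.944, so this check would print nothing.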
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:27 crc kubenswrapper[4820]: I0201 14:39:27.943290 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb4wh\" (UniqueName: \"kubernetes.io/projected/56ebbdff-635e-4998-b84f-04dbe869ab4e-kube-api-access-nb4wh\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:27 crc kubenswrapper[4820]: I0201 14:39:27.944165 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ebbdff-635e-4998-b84f-04dbe869ab4e-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:27 crc kubenswrapper[4820]: I0201 14:39:27.944185 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56ebbdff-635e-4998-b84f-04dbe869ab4e-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:27 crc kubenswrapper[4820]: I0201 14:39:27.944201 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ebbdff-635e-4998-b84f-04dbe869ab4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:28 crc kubenswrapper[4820]: I0201 14:39:28.347690 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qqtrm" event={"ID":"56ebbdff-635e-4998-b84f-04dbe869ab4e","Type":"ContainerDied","Data":"e5dc69bb31b6121dbf395cdd71b655606add04e3da11fd1fd83d40437f9be4cb"} Feb 01 14:39:28 crc kubenswrapper[4820]: I0201 14:39:28.347740 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5dc69bb31b6121dbf395cdd71b655606add04e3da11fd1fd83d40437f9be4cb" Feb 01 14:39:28 crc kubenswrapper[4820]: I0201 14:39:28.348358 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qqtrm" Feb 01 14:39:28 crc kubenswrapper[4820]: I0201 14:39:28.487988 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 01 14:39:28 crc kubenswrapper[4820]: E0201 14:39:28.488331 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ebbdff-635e-4998-b84f-04dbe869ab4e" containerName="nova-cell0-conductor-db-sync" Feb 01 14:39:28 crc kubenswrapper[4820]: I0201 14:39:28.488348 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ebbdff-635e-4998-b84f-04dbe869ab4e" containerName="nova-cell0-conductor-db-sync" Feb 01 14:39:28 crc kubenswrapper[4820]: I0201 14:39:28.488519 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ebbdff-635e-4998-b84f-04dbe869ab4e" containerName="nova-cell0-conductor-db-sync" Feb 01 14:39:28 crc kubenswrapper[4820]: I0201 14:39:28.489145 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 01 14:39:28 crc kubenswrapper[4820]: I0201 14:39:28.491456 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 01 14:39:28 crc kubenswrapper[4820]: I0201 14:39:28.492414 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8fs5g" Feb 01 14:39:28 crc kubenswrapper[4820]: I0201 14:39:28.501662 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 01 14:39:28 crc kubenswrapper[4820]: I0201 14:39:28.657679 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2655f84-261a-48ba-b019-d2e11ed4e80e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f2655f84-261a-48ba-b019-d2e11ed4e80e\") " pod="openstack/nova-cell0-conductor-0" Feb 01 14:39:28 crc kubenswrapper[4820]: I0201 14:39:28.657764 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2655f84-261a-48ba-b019-d2e11ed4e80e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f2655f84-261a-48ba-b019-d2e11ed4e80e\") " pod="openstack/nova-cell0-conductor-0" Feb 01 14:39:28 crc kubenswrapper[4820]: I0201 14:39:28.658121 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hwdw\" (UniqueName: \"kubernetes.io/projected/f2655f84-261a-48ba-b019-d2e11ed4e80e-kube-api-access-8hwdw\") pod \"nova-cell0-conductor-0\" (UID: \"f2655f84-261a-48ba-b019-d2e11ed4e80e\") " pod="openstack/nova-cell0-conductor-0" Feb 01 14:39:28 crc kubenswrapper[4820]: I0201 14:39:28.759866 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2655f84-261a-48ba-b019-d2e11ed4e80e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f2655f84-261a-48ba-b019-d2e11ed4e80e\") " pod="openstack/nova-cell0-conductor-0" Feb 01 14:39:28 crc kubenswrapper[4820]: I0201 14:39:28.760749 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hwdw\" (UniqueName: \"kubernetes.io/projected/f2655f84-261a-48ba-b019-d2e11ed4e80e-kube-api-access-8hwdw\") pod \"nova-cell0-conductor-0\" (UID: \"f2655f84-261a-48ba-b019-d2e11ed4e80e\") " pod="openstack/nova-cell0-conductor-0" Feb 01 14:39:28 crc kubenswrapper[4820]: I0201 14:39:28.760839 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2655f84-261a-48ba-b019-d2e11ed4e80e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f2655f84-261a-48ba-b019-d2e11ed4e80e\") " pod="openstack/nova-cell0-conductor-0" Feb 01 14:39:28 crc kubenswrapper[4820]: I0201 14:39:28.766744 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2655f84-261a-48ba-b019-d2e11ed4e80e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f2655f84-261a-48ba-b019-d2e11ed4e80e\") " pod="openstack/nova-cell0-conductor-0" Feb 01 14:39:28 crc kubenswrapper[4820]: I0201 14:39:28.766778 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2655f84-261a-48ba-b019-d2e11ed4e80e-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"f2655f84-261a-48ba-b019-d2e11ed4e80e\") " pod="openstack/nova-cell0-conductor-0" Feb 01 14:39:28 crc kubenswrapper[4820]: I0201 14:39:28.777292 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hwdw\" (UniqueName: \"kubernetes.io/projected/f2655f84-261a-48ba-b019-d2e11ed4e80e-kube-api-access-8hwdw\") pod \"nova-cell0-conductor-0\" (UID: \"f2655f84-261a-48ba-b019-d2e11ed4e80e\") " pod="openstack/nova-cell0-conductor-0" Feb 01 14:39:28 crc kubenswrapper[4820]: I0201 14:39:28.805417 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 01 14:39:29 crc kubenswrapper[4820]: I0201 14:39:29.254913 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 01 14:39:29 crc kubenswrapper[4820]: W0201 14:39:29.260458 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2655f84_261a_48ba_b019_d2e11ed4e80e.slice/crio-90c19cf6702c94ed54c2ed009a9e93524d37d501be539d538cb33ec47edaf4ef WatchSource:0}: Error finding container 90c19cf6702c94ed54c2ed009a9e93524d37d501be539d538cb33ec47edaf4ef: Status 404 returned error can't find the container with id 90c19cf6702c94ed54c2ed009a9e93524d37d501be539d538cb33ec47edaf4ef Feb 01 14:39:29 crc kubenswrapper[4820]: I0201 14:39:29.356967 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f2655f84-261a-48ba-b019-d2e11ed4e80e","Type":"ContainerStarted","Data":"90c19cf6702c94ed54c2ed009a9e93524d37d501be539d538cb33ec47edaf4ef"} Feb 01 14:39:30 crc kubenswrapper[4820]: I0201 14:39:30.377861 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f2655f84-261a-48ba-b019-d2e11ed4e80e","Type":"ContainerStarted","Data":"07cf5fabbf484bd28de9b1d2f7adc4c97d766885f9f888aa19a48135df55ae6b"} Feb 01 14:39:30 crc kubenswrapper[4820]: I0201 14:39:30.378132 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 01 14:39:30 crc kubenswrapper[4820]: I0201 14:39:30.397732 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.397715845 podStartE2EDuration="2.397715845s" podCreationTimestamp="2026-02-01 14:39:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:39:30.396018554 +0000 UTC m=+1111.916384838" watchObservedRunningTime="2026-02-01 14:39:30.397715845 +0000 UTC m=+1111.918082129" Feb 01 14:39:34 crc kubenswrapper[4820]: E0201 14:39:34.621235 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8255e6c_59ea_4449_bcec_264a12bf6d6e.slice/crio-76936d492ceac0fbfeeede8ea87447c86774da965e25e10dd874d13d7252157c\": RecentStats: unable to find data in memory cache]" Feb 01 14:39:38 crc kubenswrapper[4820]: I0201 14:39:38.841032 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.295704 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-sg5pc"] Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.296689 4820 util.go:30] "No sandbox for pod can 
Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.298717 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.298891 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.309338 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-sg5pc"] Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.452466 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df439e0e-3443-4c9f-b049-8a36a7e38d86-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-sg5pc\" (UID: \"df439e0e-3443-4c9f-b049-8a36a7e38d86\") " pod="openstack/nova-cell0-cell-mapping-sg5pc" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.452534 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djdbq\" (UniqueName: \"kubernetes.io/projected/df439e0e-3443-4c9f-b049-8a36a7e38d86-kube-api-access-djdbq\") pod \"nova-cell0-cell-mapping-sg5pc\" (UID: \"df439e0e-3443-4c9f-b049-8a36a7e38d86\") " pod="openstack/nova-cell0-cell-mapping-sg5pc" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.452563 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df439e0e-3443-4c9f-b049-8a36a7e38d86-config-data\") pod \"nova-cell0-cell-mapping-sg5pc\" (UID: \"df439e0e-3443-4c9f-b049-8a36a7e38d86\") " pod="openstack/nova-cell0-cell-mapping-sg5pc" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.452650 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df439e0e-3443-4c9f-b049-8a36a7e38d86-scripts\") pod \"nova-cell0-cell-mapping-sg5pc\" (UID: \"df439e0e-3443-4c9f-b049-8a36a7e38d86\") " pod="openstack/nova-cell0-cell-mapping-sg5pc" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.509239 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.510678 4820 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.512998 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.528487 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.555896 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djdbq\" (UniqueName: \"kubernetes.io/projected/df439e0e-3443-4c9f-b049-8a36a7e38d86-kube-api-access-djdbq\") pod \"nova-cell0-cell-mapping-sg5pc\" (UID: \"df439e0e-3443-4c9f-b049-8a36a7e38d86\") " pod="openstack/nova-cell0-cell-mapping-sg5pc" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.555938 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df439e0e-3443-4c9f-b049-8a36a7e38d86-config-data\") pod \"nova-cell0-cell-mapping-sg5pc\" (UID: \"df439e0e-3443-4c9f-b049-8a36a7e38d86\") " pod="openstack/nova-cell0-cell-mapping-sg5pc" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.556006 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df439e0e-3443-4c9f-b049-8a36a7e38d86-scripts\") pod \"nova-cell0-cell-mapping-sg5pc\" (UID: \"df439e0e-3443-4c9f-b049-8a36a7e38d86\") " pod="openstack/nova-cell0-cell-mapping-sg5pc" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.556089 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df439e0e-3443-4c9f-b049-8a36a7e38d86-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-sg5pc\" (UID: \"df439e0e-3443-4c9f-b049-8a36a7e38d86\") " pod="openstack/nova-cell0-cell-mapping-sg5pc" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.564468 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df439e0e-3443-4c9f-b049-8a36a7e38d86-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-sg5pc\" (UID: \"df439e0e-3443-4c9f-b049-8a36a7e38d86\") " pod="openstack/nova-cell0-cell-mapping-sg5pc" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.566452 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df439e0e-3443-4c9f-b049-8a36a7e38d86-config-data\") pod \"nova-cell0-cell-mapping-sg5pc\" (UID: \"df439e0e-3443-4c9f-b049-8a36a7e38d86\") " pod="openstack/nova-cell0-cell-mapping-sg5pc" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.566867 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df439e0e-3443-4c9f-b049-8a36a7e38d86-scripts\") pod \"nova-cell0-cell-mapping-sg5pc\" (UID: \"df439e0e-3443-4c9f-b049-8a36a7e38d86\") " pod="openstack/nova-cell0-cell-mapping-sg5pc" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.573913 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.575293 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.588631 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.597916 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djdbq\" (UniqueName: \"kubernetes.io/projected/df439e0e-3443-4c9f-b049-8a36a7e38d86-kube-api-access-djdbq\") pod \"nova-cell0-cell-mapping-sg5pc\" (UID: \"df439e0e-3443-4c9f-b049-8a36a7e38d86\") " pod="openstack/nova-cell0-cell-mapping-sg5pc" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.619230 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-sg5pc" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.627959 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.657192 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/145c7c6b-d611-4dc2-99ba-c6b2cfa7d813-config-data\") pod \"nova-metadata-0\" (UID: \"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813\") " pod="openstack/nova-metadata-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.657254 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ae750ad-27c5-4154-b0b2-ce81412b109a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9ae750ad-27c5-4154-b0b2-ce81412b109a\") " pod="openstack/nova-api-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.657287 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ae750ad-27c5-4154-b0b2-ce81412b109a-config-data\") pod \"nova-api-0\" (UID: \"9ae750ad-27c5-4154-b0b2-ce81412b109a\") " pod="openstack/nova-api-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.657373 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ae750ad-27c5-4154-b0b2-ce81412b109a-logs\") pod \"nova-api-0\" (UID: \"9ae750ad-27c5-4154-b0b2-ce81412b109a\") " pod="openstack/nova-api-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.657410 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqx8c\" (UniqueName: \"kubernetes.io/projected/9ae750ad-27c5-4154-b0b2-ce81412b109a-kube-api-access-gqx8c\") pod \"nova-api-0\" (UID: \"9ae750ad-27c5-4154-b0b2-ce81412b109a\") " pod="openstack/nova-api-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.657439 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/145c7c6b-d611-4dc2-99ba-c6b2cfa7d813-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813\") " pod="openstack/nova-metadata-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.657489 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvx2j\" (UniqueName: \"kubernetes.io/projected/145c7c6b-d611-4dc2-99ba-c6b2cfa7d813-kube-api-access-xvx2j\") pod \"nova-metadata-0\" (UID: 
\"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813\") " pod="openstack/nova-metadata-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.657512 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/145c7c6b-d611-4dc2-99ba-c6b2cfa7d813-logs\") pod \"nova-metadata-0\" (UID: \"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813\") " pod="openstack/nova-metadata-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.668551 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-ltqrv"] Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.670722 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.705164 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-ltqrv"] Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.721582 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.722668 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.736138 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.756471 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.757836 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.760075 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4a92823-3b74-4ef8-8104-b655c13d44ee-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-ltqrv\" (UID: \"c4a92823-3b74-4ef8-8104-b655c13d44ee\") " pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.760121 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ae750ad-27c5-4154-b0b2-ce81412b109a-logs\") pod \"nova-api-0\" (UID: \"9ae750ad-27c5-4154-b0b2-ce81412b109a\") " pod="openstack/nova-api-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.760206 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqx8c\" (UniqueName: \"kubernetes.io/projected/9ae750ad-27c5-4154-b0b2-ce81412b109a-kube-api-access-gqx8c\") pod \"nova-api-0\" (UID: \"9ae750ad-27c5-4154-b0b2-ce81412b109a\") " pod="openstack/nova-api-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.760288 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4a92823-3b74-4ef8-8104-b655c13d44ee-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-ltqrv\" (UID: \"c4a92823-3b74-4ef8-8104-b655c13d44ee\") " pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.760319 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/145c7c6b-d611-4dc2-99ba-c6b2cfa7d813-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813\") " pod="openstack/nova-metadata-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.760353 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkwfx\" (UniqueName: \"kubernetes.io/projected/c4a92823-3b74-4ef8-8104-b655c13d44ee-kube-api-access-tkwfx\") pod \"dnsmasq-dns-8b8cf6657-ltqrv\" (UID: \"c4a92823-3b74-4ef8-8104-b655c13d44ee\") " pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.760409 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4a92823-3b74-4ef8-8104-b655c13d44ee-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-ltqrv\" (UID: \"c4a92823-3b74-4ef8-8104-b655c13d44ee\") " pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.760450 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4a92823-3b74-4ef8-8104-b655c13d44ee-config\") pod \"dnsmasq-dns-8b8cf6657-ltqrv\" (UID: \"c4a92823-3b74-4ef8-8104-b655c13d44ee\") " pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.760477 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvx2j\" (UniqueName: \"kubernetes.io/projected/145c7c6b-d611-4dc2-99ba-c6b2cfa7d813-kube-api-access-xvx2j\") pod \"nova-metadata-0\" (UID: \"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813\") " pod="openstack/nova-metadata-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.760503 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/145c7c6b-d611-4dc2-99ba-c6b2cfa7d813-logs\") pod \"nova-metadata-0\" (UID: \"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813\") " pod="openstack/nova-metadata-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.760531 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ae750ad-27c5-4154-b0b2-ce81412b109a-logs\") pod \"nova-api-0\" (UID: \"9ae750ad-27c5-4154-b0b2-ce81412b109a\") " pod="openstack/nova-api-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.760534 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/145c7c6b-d611-4dc2-99ba-c6b2cfa7d813-config-data\") pod \"nova-metadata-0\" (UID: \"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813\") " pod="openstack/nova-metadata-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.760601 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ae750ad-27c5-4154-b0b2-ce81412b109a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9ae750ad-27c5-4154-b0b2-ce81412b109a\") " pod="openstack/nova-api-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.760633 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ae750ad-27c5-4154-b0b2-ce81412b109a-config-data\") pod \"nova-api-0\" (UID: \"9ae750ad-27c5-4154-b0b2-ce81412b109a\") " pod="openstack/nova-api-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.761522 4820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"nova-scheduler-config-data" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.764579 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/145c7c6b-d611-4dc2-99ba-c6b2cfa7d813-logs\") pod \"nova-metadata-0\" (UID: \"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813\") " pod="openstack/nova-metadata-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.768623 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/145c7c6b-d611-4dc2-99ba-c6b2cfa7d813-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813\") " pod="openstack/nova-metadata-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.771415 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ae750ad-27c5-4154-b0b2-ce81412b109a-config-data\") pod \"nova-api-0\" (UID: \"9ae750ad-27c5-4154-b0b2-ce81412b109a\") " pod="openstack/nova-api-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.773447 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ae750ad-27c5-4154-b0b2-ce81412b109a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9ae750ad-27c5-4154-b0b2-ce81412b109a\") " pod="openstack/nova-api-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.782008 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/145c7c6b-d611-4dc2-99ba-c6b2cfa7d813-config-data\") pod \"nova-metadata-0\" (UID: \"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813\") " pod="openstack/nova-metadata-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.782077 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.792194 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvx2j\" (UniqueName: \"kubernetes.io/projected/145c7c6b-d611-4dc2-99ba-c6b2cfa7d813-kube-api-access-xvx2j\") pod \"nova-metadata-0\" (UID: \"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813\") " pod="openstack/nova-metadata-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.792422 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqx8c\" (UniqueName: \"kubernetes.io/projected/9ae750ad-27c5-4154-b0b2-ce81412b109a-kube-api-access-gqx8c\") pod \"nova-api-0\" (UID: \"9ae750ad-27c5-4154-b0b2-ce81412b109a\") " pod="openstack/nova-api-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.828086 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.835207 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.841314 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.862510 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z8tv\" (UniqueName: \"kubernetes.io/projected/f5d316bc-edb0-4779-836c-ee3368e84b91-kube-api-access-9z8tv\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5d316bc-edb0-4779-836c-ee3368e84b91\") " pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.862595 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4a92823-3b74-4ef8-8104-b655c13d44ee-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-ltqrv\" (UID: \"c4a92823-3b74-4ef8-8104-b655c13d44ee\") " pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.862632 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d316bc-edb0-4779-836c-ee3368e84b91-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5d316bc-edb0-4779-836c-ee3368e84b91\") " pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.862725 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4a92823-3b74-4ef8-8104-b655c13d44ee-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-ltqrv\" (UID: \"c4a92823-3b74-4ef8-8104-b655c13d44ee\") " pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.862776 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkwfx\" (UniqueName: \"kubernetes.io/projected/c4a92823-3b74-4ef8-8104-b655c13d44ee-kube-api-access-tkwfx\") pod \"dnsmasq-dns-8b8cf6657-ltqrv\" (UID: \"c4a92823-3b74-4ef8-8104-b655c13d44ee\") " pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.862802 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztbmc\" (UniqueName: \"kubernetes.io/projected/a951a7fb-4de6-4238-913a-2b072052bd9e-kube-api-access-ztbmc\") pod \"nova-scheduler-0\" (UID: \"a951a7fb-4de6-4238-913a-2b072052bd9e\") " pod="openstack/nova-scheduler-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.862894 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4a92823-3b74-4ef8-8104-b655c13d44ee-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-ltqrv\" (UID: \"c4a92823-3b74-4ef8-8104-b655c13d44ee\") " pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.862930 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4a92823-3b74-4ef8-8104-b655c13d44ee-config\") pod \"dnsmasq-dns-8b8cf6657-ltqrv\" (UID: \"c4a92823-3b74-4ef8-8104-b655c13d44ee\") " pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.862984 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a951a7fb-4de6-4238-913a-2b072052bd9e-config-data\") pod \"nova-scheduler-0\" (UID: \"a951a7fb-4de6-4238-913a-2b072052bd9e\") " 
pod="openstack/nova-scheduler-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.863005 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a951a7fb-4de6-4238-913a-2b072052bd9e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a951a7fb-4de6-4238-913a-2b072052bd9e\") " pod="openstack/nova-scheduler-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.863028 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5d316bc-edb0-4779-836c-ee3368e84b91-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5d316bc-edb0-4779-836c-ee3368e84b91\") " pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.864793 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4a92823-3b74-4ef8-8104-b655c13d44ee-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-ltqrv\" (UID: \"c4a92823-3b74-4ef8-8104-b655c13d44ee\") " pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.865584 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4a92823-3b74-4ef8-8104-b655c13d44ee-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-ltqrv\" (UID: \"c4a92823-3b74-4ef8-8104-b655c13d44ee\") " pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.865861 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4a92823-3b74-4ef8-8104-b655c13d44ee-config\") pod \"dnsmasq-dns-8b8cf6657-ltqrv\" (UID: \"c4a92823-3b74-4ef8-8104-b655c13d44ee\") " pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.868333 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4a92823-3b74-4ef8-8104-b655c13d44ee-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-ltqrv\" (UID: \"c4a92823-3b74-4ef8-8104-b655c13d44ee\") " pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.883403 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkwfx\" (UniqueName: \"kubernetes.io/projected/c4a92823-3b74-4ef8-8104-b655c13d44ee-kube-api-access-tkwfx\") pod \"dnsmasq-dns-8b8cf6657-ltqrv\" (UID: \"c4a92823-3b74-4ef8-8104-b655c13d44ee\") " pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.902535 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="dd47f21b-7802-4484-95f4-c7254be818eb" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.964052 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztbmc\" (UniqueName: \"kubernetes.io/projected/a951a7fb-4de6-4238-913a-2b072052bd9e-kube-api-access-ztbmc\") pod \"nova-scheduler-0\" (UID: \"a951a7fb-4de6-4238-913a-2b072052bd9e\") " pod="openstack/nova-scheduler-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.964151 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a951a7fb-4de6-4238-913a-2b072052bd9e-config-data\") pod \"nova-scheduler-0\" (UID: \"a951a7fb-4de6-4238-913a-2b072052bd9e\") " pod="openstack/nova-scheduler-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.964173 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a951a7fb-4de6-4238-913a-2b072052bd9e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a951a7fb-4de6-4238-913a-2b072052bd9e\") " pod="openstack/nova-scheduler-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.964200 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5d316bc-edb0-4779-836c-ee3368e84b91-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5d316bc-edb0-4779-836c-ee3368e84b91\") " pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.964239 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z8tv\" (UniqueName: \"kubernetes.io/projected/f5d316bc-edb0-4779-836c-ee3368e84b91-kube-api-access-9z8tv\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5d316bc-edb0-4779-836c-ee3368e84b91\") " pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.964282 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d316bc-edb0-4779-836c-ee3368e84b91-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5d316bc-edb0-4779-836c-ee3368e84b91\") " pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.967815 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5d316bc-edb0-4779-836c-ee3368e84b91-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5d316bc-edb0-4779-836c-ee3368e84b91\") " pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.967893 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a951a7fb-4de6-4238-913a-2b072052bd9e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a951a7fb-4de6-4238-913a-2b072052bd9e\") " pod="openstack/nova-scheduler-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.967897 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a951a7fb-4de6-4238-913a-2b072052bd9e-config-data\") pod \"nova-scheduler-0\" (UID: \"a951a7fb-4de6-4238-913a-2b072052bd9e\") " pod="openstack/nova-scheduler-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.969545 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d316bc-edb0-4779-836c-ee3368e84b91-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5d316bc-edb0-4779-836c-ee3368e84b91\") " pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.981710 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztbmc\" (UniqueName: \"kubernetes.io/projected/a951a7fb-4de6-4238-913a-2b072052bd9e-kube-api-access-ztbmc\") pod \"nova-scheduler-0\" (UID: \"a951a7fb-4de6-4238-913a-2b072052bd9e\") " pod="openstack/nova-scheduler-0" Feb 01 14:39:39 crc 
Feb 01 14:39:39 crc kubenswrapper[4820]: I0201 14:39:39.982555 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z8tv\" (UniqueName: \"kubernetes.io/projected/f5d316bc-edb0-4779-836c-ee3368e84b91-kube-api-access-9z8tv\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5d316bc-edb0-4779-836c-ee3368e84b91\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.153236 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv"
Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.161716 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.197320 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.200145 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-sg5pc"]
Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.382573 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-s2kcz"]
Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.388110 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-s2kcz"
Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.391345 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.395545 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.397158 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-s2kcz"]
Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.407195 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 01 14:39:40 crc kubenswrapper[4820]: W0201 14:39:40.416266 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod145c7c6b_d611_4dc2_99ba_c6b2cfa7d813.slice/crio-f9148298132c7af803b375c76f5dddd39e3a47ef619a30b7aed2483a55734a65 WatchSource:0}: Error finding container f9148298132c7af803b375c76f5dddd39e3a47ef619a30b7aed2483a55734a65: Status 404 returned error can't find the container with id f9148298132c7af803b375c76f5dddd39e3a47ef619a30b7aed2483a55734a65
Feb 01 14:39:40 crc kubenswrapper[4820]: W0201 14:39:40.469166 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ae750ad_27c5_4154_b0b2_ce81412b109a.slice/crio-e644a9caa3e3bb80e5af498ba1edc63e601f4c8a78fe029c0eedfa45f83dbba3 WatchSource:0}: Error finding container e644a9caa3e3bb80e5af498ba1edc63e601f4c8a78fe029c0eedfa45f83dbba3: Status 404 returned error can't find the container with id e644a9caa3e3bb80e5af498ba1edc63e601f4c8a78fe029c0eedfa45f83dbba3
Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.470511 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
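The two W-severity manager.go records are a benign startup race, not a pod failure: cAdvisor noticed the new crio-… cgroup before the runtime had the container registered, so the watch handler got a 404; the PLEG events that follow show both containers starting normally. Every record here carries the standard klog header (severity letter, MMDD date, wall-clock time, PID, file:line), which can be split off mechanically; the pattern below is an assumption matching the records as shown:

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogRE matches the klog header: severity letter, MMDD, HH:MM:SS.micros,
    // PID, then file:line. Based on the records visible in this log.
    var klogRE = regexp.MustCompile(`([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w.]+:\d+)\]`)

    func main() {
        line := `W0201 14:39:40.416266 4820 manager.go:1169] Failed to process watch event`
        if m := klogRE.FindStringSubmatch(line); m != nil {
            fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s\n",
                m[1], m[2], m[3], m[4], m[5])
        }
    }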
event={"ID":"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813","Type":"ContainerStarted","Data":"f9148298132c7af803b375c76f5dddd39e3a47ef619a30b7aed2483a55734a65"} Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.473077 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-sg5pc" event={"ID":"df439e0e-3443-4c9f-b049-8a36a7e38d86","Type":"ContainerStarted","Data":"e3e1bcdfc75a82646e70fe5e1cbd50ff802c30cbe1ed9d4c17b3791c81b7093b"} Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.473261 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jkcc\" (UniqueName: \"kubernetes.io/projected/61fdf904-8a91-45c5-8f1a-0fd56291b77e-kube-api-access-4jkcc\") pod \"nova-cell1-conductor-db-sync-s2kcz\" (UID: \"61fdf904-8a91-45c5-8f1a-0fd56291b77e\") " pod="openstack/nova-cell1-conductor-db-sync-s2kcz" Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.473301 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61fdf904-8a91-45c5-8f1a-0fd56291b77e-scripts\") pod \"nova-cell1-conductor-db-sync-s2kcz\" (UID: \"61fdf904-8a91-45c5-8f1a-0fd56291b77e\") " pod="openstack/nova-cell1-conductor-db-sync-s2kcz" Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.473358 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61fdf904-8a91-45c5-8f1a-0fd56291b77e-config-data\") pod \"nova-cell1-conductor-db-sync-s2kcz\" (UID: \"61fdf904-8a91-45c5-8f1a-0fd56291b77e\") " pod="openstack/nova-cell1-conductor-db-sync-s2kcz" Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.473424 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61fdf904-8a91-45c5-8f1a-0fd56291b77e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-s2kcz\" (UID: \"61fdf904-8a91-45c5-8f1a-0fd56291b77e\") " pod="openstack/nova-cell1-conductor-db-sync-s2kcz" Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.574446 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61fdf904-8a91-45c5-8f1a-0fd56291b77e-config-data\") pod \"nova-cell1-conductor-db-sync-s2kcz\" (UID: \"61fdf904-8a91-45c5-8f1a-0fd56291b77e\") " pod="openstack/nova-cell1-conductor-db-sync-s2kcz" Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.574533 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61fdf904-8a91-45c5-8f1a-0fd56291b77e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-s2kcz\" (UID: \"61fdf904-8a91-45c5-8f1a-0fd56291b77e\") " pod="openstack/nova-cell1-conductor-db-sync-s2kcz" Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.574591 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jkcc\" (UniqueName: \"kubernetes.io/projected/61fdf904-8a91-45c5-8f1a-0fd56291b77e-kube-api-access-4jkcc\") pod \"nova-cell1-conductor-db-sync-s2kcz\" (UID: \"61fdf904-8a91-45c5-8f1a-0fd56291b77e\") " pod="openstack/nova-cell1-conductor-db-sync-s2kcz" Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.574615 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/61fdf904-8a91-45c5-8f1a-0fd56291b77e-scripts\") pod \"nova-cell1-conductor-db-sync-s2kcz\" (UID: \"61fdf904-8a91-45c5-8f1a-0fd56291b77e\") " pod="openstack/nova-cell1-conductor-db-sync-s2kcz" Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.580170 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61fdf904-8a91-45c5-8f1a-0fd56291b77e-scripts\") pod \"nova-cell1-conductor-db-sync-s2kcz\" (UID: \"61fdf904-8a91-45c5-8f1a-0fd56291b77e\") " pod="openstack/nova-cell1-conductor-db-sync-s2kcz" Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.582472 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61fdf904-8a91-45c5-8f1a-0fd56291b77e-config-data\") pod \"nova-cell1-conductor-db-sync-s2kcz\" (UID: \"61fdf904-8a91-45c5-8f1a-0fd56291b77e\") " pod="openstack/nova-cell1-conductor-db-sync-s2kcz" Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.592424 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61fdf904-8a91-45c5-8f1a-0fd56291b77e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-s2kcz\" (UID: \"61fdf904-8a91-45c5-8f1a-0fd56291b77e\") " pod="openstack/nova-cell1-conductor-db-sync-s2kcz" Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.592595 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jkcc\" (UniqueName: \"kubernetes.io/projected/61fdf904-8a91-45c5-8f1a-0fd56291b77e-kube-api-access-4jkcc\") pod \"nova-cell1-conductor-db-sync-s2kcz\" (UID: \"61fdf904-8a91-45c5-8f1a-0fd56291b77e\") " pod="openstack/nova-cell1-conductor-db-sync-s2kcz" Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.706428 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-ltqrv"] Feb 01 14:39:40 crc kubenswrapper[4820]: W0201 14:39:40.711398 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4a92823_3b74_4ef8_8104_b655c13d44ee.slice/crio-884e14265ec592521c9cf96bf0e0633533866ad54bc8d03296ba1d4831d18e42 WatchSource:0}: Error finding container 884e14265ec592521c9cf96bf0e0633533866ad54bc8d03296ba1d4831d18e42: Status 404 returned error can't find the container with id 884e14265ec592521c9cf96bf0e0633533866ad54bc8d03296ba1d4831d18e42 Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.726646 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-s2kcz" Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.805742 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 01 14:39:40 crc kubenswrapper[4820]: I0201 14:39:40.815640 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 01 14:39:41 crc kubenswrapper[4820]: I0201 14:39:41.227268 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-s2kcz"] Feb 01 14:39:41 crc kubenswrapper[4820]: W0201 14:39:41.233452 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61fdf904_8a91_45c5_8f1a_0fd56291b77e.slice/crio-a9a9d89782991f5c298ca825d1e900001d880dd13dad4af7ec5eca3f6d5fa5dd WatchSource:0}: Error finding container a9a9d89782991f5c298ca825d1e900001d880dd13dad4af7ec5eca3f6d5fa5dd: Status 404 returned error can't find the container with id a9a9d89782991f5c298ca825d1e900001d880dd13dad4af7ec5eca3f6d5fa5dd Feb 01 14:39:41 crc kubenswrapper[4820]: I0201 14:39:41.486729 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ae750ad-27c5-4154-b0b2-ce81412b109a","Type":"ContainerStarted","Data":"e644a9caa3e3bb80e5af498ba1edc63e601f4c8a78fe029c0eedfa45f83dbba3"} Feb 01 14:39:41 crc kubenswrapper[4820]: I0201 14:39:41.489241 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-s2kcz" event={"ID":"61fdf904-8a91-45c5-8f1a-0fd56291b77e","Type":"ContainerStarted","Data":"9391c88c3cfcefdabd8aaeea23209970dc3056fa71411160501848a498eb0f0b"} Feb 01 14:39:41 crc kubenswrapper[4820]: I0201 14:39:41.489273 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-s2kcz" event={"ID":"61fdf904-8a91-45c5-8f1a-0fd56291b77e","Type":"ContainerStarted","Data":"a9a9d89782991f5c298ca825d1e900001d880dd13dad4af7ec5eca3f6d5fa5dd"} Feb 01 14:39:41 crc kubenswrapper[4820]: I0201 14:39:41.495600 4820 generic.go:334] "Generic (PLEG): container finished" podID="c4a92823-3b74-4ef8-8104-b655c13d44ee" containerID="7c6b1fe6696b056aff48ef1bc0584e51449ed8d049cd0aac799194c9a88afd0b" exitCode=0 Feb 01 14:39:41 crc kubenswrapper[4820]: I0201 14:39:41.495671 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" event={"ID":"c4a92823-3b74-4ef8-8104-b655c13d44ee","Type":"ContainerDied","Data":"7c6b1fe6696b056aff48ef1bc0584e51449ed8d049cd0aac799194c9a88afd0b"} Feb 01 14:39:41 crc kubenswrapper[4820]: I0201 14:39:41.495697 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" event={"ID":"c4a92823-3b74-4ef8-8104-b655c13d44ee","Type":"ContainerStarted","Data":"884e14265ec592521c9cf96bf0e0633533866ad54bc8d03296ba1d4831d18e42"} Feb 01 14:39:41 crc kubenswrapper[4820]: I0201 14:39:41.503197 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a951a7fb-4de6-4238-913a-2b072052bd9e","Type":"ContainerStarted","Data":"63708ee9eef9bc94b041cf70526ce066c79573215bafe7945f14dd62154808ea"} Feb 01 14:39:41 crc kubenswrapper[4820]: I0201 14:39:41.508481 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-sg5pc" event={"ID":"df439e0e-3443-4c9f-b049-8a36a7e38d86","Type":"ContainerStarted","Data":"daf9089a3b95eeef2a674a610ec81c6509e38e44776d6f1e9bed566660967141"} Feb 01 14:39:41 
crc kubenswrapper[4820]: I0201 14:39:41.515138 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-s2kcz" podStartSLOduration=1.515117493 podStartE2EDuration="1.515117493s" podCreationTimestamp="2026-02-01 14:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:39:41.507513088 +0000 UTC m=+1123.027879392" watchObservedRunningTime="2026-02-01 14:39:41.515117493 +0000 UTC m=+1123.035483777" Feb 01 14:39:41 crc kubenswrapper[4820]: I0201 14:39:41.516853 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f5d316bc-edb0-4779-836c-ee3368e84b91","Type":"ContainerStarted","Data":"cde99c3c2ad6b24e1e7a78fea6b13001ad5b2fef2a24d9e012dd25d240367b81"} Feb 01 14:39:41 crc kubenswrapper[4820]: I0201 14:39:41.550950 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-sg5pc" podStartSLOduration=2.55093173 podStartE2EDuration="2.55093173s" podCreationTimestamp="2026-02-01 14:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:39:41.540782971 +0000 UTC m=+1123.061149255" watchObservedRunningTime="2026-02-01 14:39:41.55093173 +0000 UTC m=+1123.071298014" Feb 01 14:39:42 crc kubenswrapper[4820]: I0201 14:39:42.538536 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" event={"ID":"c4a92823-3b74-4ef8-8104-b655c13d44ee","Type":"ContainerStarted","Data":"5d5812a21e0848264db65e67fe20cc5335bb6c4d89904afdae3752c234aa757a"} Feb 01 14:39:42 crc kubenswrapper[4820]: I0201 14:39:42.539698 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" Feb 01 14:39:42 crc kubenswrapper[4820]: I0201 14:39:42.575503 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" podStartSLOduration=3.57548231 podStartE2EDuration="3.57548231s" podCreationTimestamp="2026-02-01 14:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:39:42.570919189 +0000 UTC m=+1124.091285463" watchObservedRunningTime="2026-02-01 14:39:42.57548231 +0000 UTC m=+1124.095848594" Feb 01 14:39:42 crc kubenswrapper[4820]: I0201 14:39:42.874973 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 01 14:39:42 crc kubenswrapper[4820]: I0201 14:39:42.884445 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 01 14:39:44 crc kubenswrapper[4820]: I0201 14:39:44.563444 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f5d316bc-edb0-4779-836c-ee3368e84b91","Type":"ContainerStarted","Data":"6b1c6efa37267f22ed755d48068b567b61d22c9489150b53e0aaedff80891d57"} Feb 01 14:39:44 crc kubenswrapper[4820]: I0201 14:39:44.563861 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f5d316bc-edb0-4779-836c-ee3368e84b91" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://6b1c6efa37267f22ed755d48068b567b61d22c9489150b53e0aaedff80891d57" gracePeriod=30 Feb 01 14:39:44 crc kubenswrapper[4820]: I0201 14:39:44.567797 
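"Killing container with a grace period … gracePeriod=30" is the normal termination path: the runtime delivers SIGTERM and escalates to SIGKILL after 30 s. The exitCode values in the "container finished" records around here follow the usual 128+signal convention: 143 = 128+15 (SIGTERM, seen for nova-metadata-log below), 137 = 128+9 (SIGKILL, seen later for ceilometer's containers), and 0 is a clean exit. A small helper for reading them:

    package main

    import "fmt"

    // signalFromExitCode applies the shell convention visible in these logs:
    // codes above 128 mean "terminated by signal (code-128)".
    func signalFromExitCode(code int) string {
        switch {
        case code == 0:
            return "clean exit"
        case code > 128:
            return fmt.Sprintf("killed by signal %d", code-128)
        default:
            return fmt.Sprintf("application error %d", code)
        }
    }

    func main() {
        for _, c := range []int{0, 143, 137} {
            fmt.Printf("exitCode=%d -> %s\n", c, signalFromExitCode(c))
        }
    }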
Feb 01 14:39:44 crc kubenswrapper[4820]: I0201 14:39:44.567797 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813","Type":"ContainerStarted","Data":"5496b451c82ed3c379c942432891520101d3f8f72a24b0f993e9d76228509bde"}
Feb 01 14:39:44 crc kubenswrapper[4820]: I0201 14:39:44.567841 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813","Type":"ContainerStarted","Data":"d346636cbb99da981634d21fcadeceb7446c15cc4fbc790c9e9654fa55d4c0be"}
Feb 01 14:39:44 crc kubenswrapper[4820]: I0201 14:39:44.568009 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="145c7c6b-d611-4dc2-99ba-c6b2cfa7d813" containerName="nova-metadata-log" containerID="cri-o://d346636cbb99da981634d21fcadeceb7446c15cc4fbc790c9e9654fa55d4c0be" gracePeriod=30
Feb 01 14:39:44 crc kubenswrapper[4820]: I0201 14:39:44.568095 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="145c7c6b-d611-4dc2-99ba-c6b2cfa7d813" containerName="nova-metadata-metadata" containerID="cri-o://5496b451c82ed3c379c942432891520101d3f8f72a24b0f993e9d76228509bde" gracePeriod=30
Feb 01 14:39:44 crc kubenswrapper[4820]: I0201 14:39:44.602506 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.689408066 podStartE2EDuration="5.602477712s" podCreationTimestamp="2026-02-01 14:39:39 +0000 UTC" firstStartedPulling="2026-02-01 14:39:40.822610044 +0000 UTC m=+1122.342976328" lastFinishedPulling="2026-02-01 14:39:43.73567969 +0000 UTC m=+1125.256045974" observedRunningTime="2026-02-01 14:39:44.585527518 +0000 UTC m=+1126.105893822" watchObservedRunningTime="2026-02-01 14:39:44.602477712 +0000 UTC m=+1126.122844016"
Feb 01 14:39:44 crc kubenswrapper[4820]: I0201 14:39:44.603225 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ae750ad-27c5-4154-b0b2-ce81412b109a","Type":"ContainerStarted","Data":"4ac51585a221f0f3f34e50d6ba04d50f8f6e5e22d1ab2a4f2e6ecf0c4944abce"}
Feb 01 14:39:44 crc kubenswrapper[4820]: I0201 14:39:44.603268 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ae750ad-27c5-4154-b0b2-ce81412b109a","Type":"ContainerStarted","Data":"d1bd213fdd87d366f2f0fe56390b363a5c589f8bd4343aed48eaf119a47b2766"}
Feb 01 14:39:44 crc kubenswrapper[4820]: I0201 14:39:44.611129 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a951a7fb-4de6-4238-913a-2b072052bd9e","Type":"ContainerStarted","Data":"9ac52252a056d71be4ee3c80e8c295ae70685a907693b1a4b148e942dd57964e"}
Feb 01 14:39:44 crc kubenswrapper[4820]: I0201 14:39:44.627316 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.311567015 podStartE2EDuration="5.627293399s" podCreationTimestamp="2026-02-01 14:39:39 +0000 UTC" firstStartedPulling="2026-02-01 14:39:40.423302097 +0000 UTC m=+1121.943668381" lastFinishedPulling="2026-02-01 14:39:43.739028481 +0000 UTC m=+1125.259394765" observedRunningTime="2026-02-01 14:39:44.613490452 +0000 UTC m=+1126.133856756" watchObservedRunningTime="2026-02-01 14:39:44.627293399 +0000 UTC m=+1126.147659703"
Feb 01 14:39:44 crc kubenswrapper[4820]: I0201 14:39:44.647555 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.729121147 podStartE2EDuration="5.647536784s" podCreationTimestamp="2026-02-01 14:39:39 +0000 UTC" firstStartedPulling="2026-02-01 14:39:40.826601292 +0000 UTC m=+1122.346967576" lastFinishedPulling="2026-02-01 14:39:43.745016919 +0000 UTC m=+1125.265383213" observedRunningTime="2026-02-01 14:39:44.639019206 +0000 UTC m=+1126.159385500" watchObservedRunningTime="2026-02-01 14:39:44.647536784 +0000 UTC m=+1126.167903078"
Feb 01 14:39:44 crc kubenswrapper[4820]: I0201 14:39:44.664713 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.402290115 podStartE2EDuration="5.664697515s" podCreationTimestamp="2026-02-01 14:39:39 +0000 UTC" firstStartedPulling="2026-02-01 14:39:40.471766803 +0000 UTC m=+1121.992133087" lastFinishedPulling="2026-02-01 14:39:43.734174203 +0000 UTC m=+1125.254540487" observedRunningTime="2026-02-01 14:39:44.658661936 +0000 UTC m=+1126.179028220" watchObservedRunningTime="2026-02-01 14:39:44.664697515 +0000 UTC m=+1126.185063799"
Feb 01 14:39:44 crc kubenswrapper[4820]: I0201 14:39:44.842336 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 01 14:39:44 crc kubenswrapper[4820]: I0201 14:39:44.842615 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 01 14:39:44 crc kubenswrapper[4820]: E0201 14:39:44.854628 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8255e6c_59ea_4449_bcec_264a12bf6d6e.slice/crio-76936d492ceac0fbfeeede8ea87447c86774da965e25e10dd874d13d7252157c\": RecentStats: unable to find data in memory cache]"
Feb 01 14:39:45 crc kubenswrapper[4820]: I0201 14:39:45.163079 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 01 14:39:45 crc kubenswrapper[4820]: I0201 14:39:45.198275 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 01 14:39:45 crc kubenswrapper[4820]: I0201 14:39:45.630234 4820 generic.go:334] "Generic (PLEG): container finished" podID="145c7c6b-d611-4dc2-99ba-c6b2cfa7d813" containerID="5496b451c82ed3c379c942432891520101d3f8f72a24b0f993e9d76228509bde" exitCode=0
Feb 01 14:39:45 crc kubenswrapper[4820]: I0201 14:39:45.630275 4820 generic.go:334] "Generic (PLEG): container finished" podID="145c7c6b-d611-4dc2-99ba-c6b2cfa7d813" containerID="d346636cbb99da981634d21fcadeceb7446c15cc4fbc790c9e9654fa55d4c0be" exitCode=143
Feb 01 14:39:45 crc kubenswrapper[4820]: I0201 14:39:45.630326 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813","Type":"ContainerDied","Data":"5496b451c82ed3c379c942432891520101d3f8f72a24b0f993e9d76228509bde"}
Feb 01 14:39:45 crc kubenswrapper[4820]: I0201 14:39:45.630374 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813","Type":"ContainerDied","Data":"d346636cbb99da981634d21fcadeceb7446c15cc4fbc790c9e9654fa55d4c0be"}
Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.326233 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
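The pod_startup_latency_tracker records expose the relationship between the two durations: podStartE2EDuration is creation-to-observed-running, while podStartSLOduration excludes the image-pull window. For nova-cell1-novncproxy-0 above, pulls ran from 14:39:40.822610044 to 14:39:43.73567969, i.e. 2.913069646 s, and 5.602477712 − 2.913069646 = 2.689408066 s, exactly the logged SLO duration; when the pull timestamps are the zero value (no pull, as for the db-sync job earlier), the two durations are equal. Checked in a sketch:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "15:04:05.000000000"
        // Timestamps copied from the nova-cell1-novncproxy-0 record above
        // (the second one padded to nine fractional digits).
        firstPull, _ := time.Parse(layout, "14:39:40.822610044")
        lastPull, _ := time.Parse(layout, "14:39:43.735679690")
        e2e := 5602477712 * time.Nanosecond // podStartE2EDuration=5.602477712s

        slo := e2e - lastPull.Sub(firstPull) // E2E minus the image-pull window
        fmt.Println(slo)                     // 2.689408066s, matching podStartSLOduration
    }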
Need to start a new one" pod="openstack/nova-metadata-0" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.436762 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/145c7c6b-d611-4dc2-99ba-c6b2cfa7d813-config-data\") pod \"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813\" (UID: \"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813\") " Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.436872 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/145c7c6b-d611-4dc2-99ba-c6b2cfa7d813-combined-ca-bundle\") pod \"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813\" (UID: \"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813\") " Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.437009 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/145c7c6b-d611-4dc2-99ba-c6b2cfa7d813-logs\") pod \"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813\" (UID: \"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813\") " Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.437032 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvx2j\" (UniqueName: \"kubernetes.io/projected/145c7c6b-d611-4dc2-99ba-c6b2cfa7d813-kube-api-access-xvx2j\") pod \"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813\" (UID: \"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813\") " Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.437851 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/145c7c6b-d611-4dc2-99ba-c6b2cfa7d813-logs" (OuterVolumeSpecName: "logs") pod "145c7c6b-d611-4dc2-99ba-c6b2cfa7d813" (UID: "145c7c6b-d611-4dc2-99ba-c6b2cfa7d813"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.442894 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/145c7c6b-d611-4dc2-99ba-c6b2cfa7d813-kube-api-access-xvx2j" (OuterVolumeSpecName: "kube-api-access-xvx2j") pod "145c7c6b-d611-4dc2-99ba-c6b2cfa7d813" (UID: "145c7c6b-d611-4dc2-99ba-c6b2cfa7d813"). InnerVolumeSpecName "kube-api-access-xvx2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.475017 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/145c7c6b-d611-4dc2-99ba-c6b2cfa7d813-config-data" (OuterVolumeSpecName: "config-data") pod "145c7c6b-d611-4dc2-99ba-c6b2cfa7d813" (UID: "145c7c6b-d611-4dc2-99ba-c6b2cfa7d813"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.477042 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/145c7c6b-d611-4dc2-99ba-c6b2cfa7d813-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "145c7c6b-d611-4dc2-99ba-c6b2cfa7d813" (UID: "145c7c6b-d611-4dc2-99ba-c6b2cfa7d813"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.539518 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/145c7c6b-d611-4dc2-99ba-c6b2cfa7d813-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.539553 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/145c7c6b-d611-4dc2-99ba-c6b2cfa7d813-logs\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.539562 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvx2j\" (UniqueName: \"kubernetes.io/projected/145c7c6b-d611-4dc2-99ba-c6b2cfa7d813-kube-api-access-xvx2j\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.539572 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/145c7c6b-d611-4dc2-99ba-c6b2cfa7d813-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.638759 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.642586 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"145c7c6b-d611-4dc2-99ba-c6b2cfa7d813","Type":"ContainerDied","Data":"f9148298132c7af803b375c76f5dddd39e3a47ef619a30b7aed2483a55734a65"} Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.642656 4820 scope.go:117] "RemoveContainer" containerID="5496b451c82ed3c379c942432891520101d3f8f72a24b0f993e9d76228509bde" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.669513 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.670896 4820 scope.go:117] "RemoveContainer" containerID="d346636cbb99da981634d21fcadeceb7446c15cc4fbc790c9e9654fa55d4c0be" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.678134 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.703952 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 01 14:39:46 crc kubenswrapper[4820]: E0201 14:39:46.704728 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="145c7c6b-d611-4dc2-99ba-c6b2cfa7d813" containerName="nova-metadata-log" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.704744 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="145c7c6b-d611-4dc2-99ba-c6b2cfa7d813" containerName="nova-metadata-log" Feb 01 14:39:46 crc kubenswrapper[4820]: E0201 14:39:46.704762 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="145c7c6b-d611-4dc2-99ba-c6b2cfa7d813" containerName="nova-metadata-metadata" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.704774 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="145c7c6b-d611-4dc2-99ba-c6b2cfa7d813" containerName="nova-metadata-metadata" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.705021 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="145c7c6b-d611-4dc2-99ba-c6b2cfa7d813" containerName="nova-metadata-log" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.705044 4820 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="145c7c6b-d611-4dc2-99ba-c6b2cfa7d813" containerName="nova-metadata-metadata" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.706276 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.711425 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.734260 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.734658 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.743641 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-config-data\") pod \"nova-metadata-0\" (UID: \"f0bc185d-fe0d-4270-a531-d60cd7b9ef95\") " pod="openstack/nova-metadata-0" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.743686 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhr7l\" (UniqueName: \"kubernetes.io/projected/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-kube-api-access-hhr7l\") pod \"nova-metadata-0\" (UID: \"f0bc185d-fe0d-4270-a531-d60cd7b9ef95\") " pod="openstack/nova-metadata-0" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.743723 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f0bc185d-fe0d-4270-a531-d60cd7b9ef95\") " pod="openstack/nova-metadata-0" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.743756 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f0bc185d-fe0d-4270-a531-d60cd7b9ef95\") " pod="openstack/nova-metadata-0" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.743772 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-logs\") pod \"nova-metadata-0\" (UID: \"f0bc185d-fe0d-4270-a531-d60cd7b9ef95\") " pod="openstack/nova-metadata-0" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.846398 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-config-data\") pod \"nova-metadata-0\" (UID: \"f0bc185d-fe0d-4270-a531-d60cd7b9ef95\") " pod="openstack/nova-metadata-0" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.846452 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhr7l\" (UniqueName: \"kubernetes.io/projected/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-kube-api-access-hhr7l\") pod \"nova-metadata-0\" (UID: \"f0bc185d-fe0d-4270-a531-d60cd7b9ef95\") " pod="openstack/nova-metadata-0" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.846486 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f0bc185d-fe0d-4270-a531-d60cd7b9ef95\") " pod="openstack/nova-metadata-0" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.846508 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f0bc185d-fe0d-4270-a531-d60cd7b9ef95\") " pod="openstack/nova-metadata-0" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.846528 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-logs\") pod \"nova-metadata-0\" (UID: \"f0bc185d-fe0d-4270-a531-d60cd7b9ef95\") " pod="openstack/nova-metadata-0" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.848337 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-logs\") pod \"nova-metadata-0\" (UID: \"f0bc185d-fe0d-4270-a531-d60cd7b9ef95\") " pod="openstack/nova-metadata-0" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.851159 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f0bc185d-fe0d-4270-a531-d60cd7b9ef95\") " pod="openstack/nova-metadata-0" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.852016 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f0bc185d-fe0d-4270-a531-d60cd7b9ef95\") " pod="openstack/nova-metadata-0" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.861666 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-config-data\") pod \"nova-metadata-0\" (UID: \"f0bc185d-fe0d-4270-a531-d60cd7b9ef95\") " pod="openstack/nova-metadata-0" Feb 01 14:39:46 crc kubenswrapper[4820]: I0201 14:39:46.862051 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhr7l\" (UniqueName: \"kubernetes.io/projected/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-kube-api-access-hhr7l\") pod \"nova-metadata-0\" (UID: \"f0bc185d-fe0d-4270-a531-d60cd7b9ef95\") " pod="openstack/nova-metadata-0" Feb 01 14:39:47 crc kubenswrapper[4820]: I0201 14:39:47.062694 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 01 14:39:47 crc kubenswrapper[4820]: I0201 14:39:47.218339 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="145c7c6b-d611-4dc2-99ba-c6b2cfa7d813" path="/var/lib/kubelet/pods/145c7c6b-d611-4dc2-99ba-c6b2cfa7d813/volumes" Feb 01 14:39:47 crc kubenswrapper[4820]: I0201 14:39:47.529818 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 01 14:39:47 crc kubenswrapper[4820]: I0201 14:39:47.649938 4820 generic.go:334] "Generic (PLEG): container finished" podID="df439e0e-3443-4c9f-b049-8a36a7e38d86" containerID="daf9089a3b95eeef2a674a610ec81c6509e38e44776d6f1e9bed566660967141" exitCode=0 Feb 01 14:39:47 crc kubenswrapper[4820]: I0201 14:39:47.650009 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-sg5pc" event={"ID":"df439e0e-3443-4c9f-b049-8a36a7e38d86","Type":"ContainerDied","Data":"daf9089a3b95eeef2a674a610ec81c6509e38e44776d6f1e9bed566660967141"} Feb 01 14:39:47 crc kubenswrapper[4820]: I0201 14:39:47.651216 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0bc185d-fe0d-4270-a531-d60cd7b9ef95","Type":"ContainerStarted","Data":"cb60efa2309febd06eb949d2505e7d82cc2d2c4ef5063740c8a243911988b6c0"} Feb 01 14:39:48 crc kubenswrapper[4820]: I0201 14:39:48.664738 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0bc185d-fe0d-4270-a531-d60cd7b9ef95","Type":"ContainerStarted","Data":"d0ded5dddd3a88965590f906d9886ecda00a7735e053afa6c978c28b96b1e3b1"} Feb 01 14:39:48 crc kubenswrapper[4820]: I0201 14:39:48.665208 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0bc185d-fe0d-4270-a531-d60cd7b9ef95","Type":"ContainerStarted","Data":"57b62dadb02e8a5b8b7f27e84a4dc51e75382dc7aaa369c2a783d73b57141a09"} Feb 01 14:39:48 crc kubenswrapper[4820]: I0201 14:39:48.696267 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.696249238 podStartE2EDuration="2.696249238s" podCreationTimestamp="2026-02-01 14:39:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:39:48.682121433 +0000 UTC m=+1130.202487727" watchObservedRunningTime="2026-02-01 14:39:48.696249238 +0000 UTC m=+1130.216615522" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.141341 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-sg5pc" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.203087 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df439e0e-3443-4c9f-b049-8a36a7e38d86-scripts\") pod \"df439e0e-3443-4c9f-b049-8a36a7e38d86\" (UID: \"df439e0e-3443-4c9f-b049-8a36a7e38d86\") " Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.203164 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df439e0e-3443-4c9f-b049-8a36a7e38d86-combined-ca-bundle\") pod \"df439e0e-3443-4c9f-b049-8a36a7e38d86\" (UID: \"df439e0e-3443-4c9f-b049-8a36a7e38d86\") " Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.203319 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df439e0e-3443-4c9f-b049-8a36a7e38d86-config-data\") pod \"df439e0e-3443-4c9f-b049-8a36a7e38d86\" (UID: \"df439e0e-3443-4c9f-b049-8a36a7e38d86\") " Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.203379 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djdbq\" (UniqueName: \"kubernetes.io/projected/df439e0e-3443-4c9f-b049-8a36a7e38d86-kube-api-access-djdbq\") pod \"df439e0e-3443-4c9f-b049-8a36a7e38d86\" (UID: \"df439e0e-3443-4c9f-b049-8a36a7e38d86\") " Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.208328 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df439e0e-3443-4c9f-b049-8a36a7e38d86-kube-api-access-djdbq" (OuterVolumeSpecName: "kube-api-access-djdbq") pod "df439e0e-3443-4c9f-b049-8a36a7e38d86" (UID: "df439e0e-3443-4c9f-b049-8a36a7e38d86"). InnerVolumeSpecName "kube-api-access-djdbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.225098 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df439e0e-3443-4c9f-b049-8a36a7e38d86-scripts" (OuterVolumeSpecName: "scripts") pod "df439e0e-3443-4c9f-b049-8a36a7e38d86" (UID: "df439e0e-3443-4c9f-b049-8a36a7e38d86"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.242568 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.242637 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.246568 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df439e0e-3443-4c9f-b049-8a36a7e38d86-config-data" (OuterVolumeSpecName: "config-data") pod "df439e0e-3443-4c9f-b049-8a36a7e38d86" (UID: "df439e0e-3443-4c9f-b049-8a36a7e38d86"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.247499 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df439e0e-3443-4c9f-b049-8a36a7e38d86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df439e0e-3443-4c9f-b049-8a36a7e38d86" (UID: "df439e0e-3443-4c9f-b049-8a36a7e38d86"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.305173 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df439e0e-3443-4c9f-b049-8a36a7e38d86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.305212 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df439e0e-3443-4c9f-b049-8a36a7e38d86-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.305222 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djdbq\" (UniqueName: \"kubernetes.io/projected/df439e0e-3443-4c9f-b049-8a36a7e38d86-kube-api-access-djdbq\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.305232 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df439e0e-3443-4c9f-b049-8a36a7e38d86-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.646505 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.672945 4820 generic.go:334] "Generic (PLEG): container finished" podID="dd47f21b-7802-4484-95f4-c7254be818eb" containerID="dfda6fddf537423c0909fbc4548380acf7c73c0ff08d0ca3e73f0b8a45068928" exitCode=137 Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.673003 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd47f21b-7802-4484-95f4-c7254be818eb","Type":"ContainerDied","Data":"dfda6fddf537423c0909fbc4548380acf7c73c0ff08d0ca3e73f0b8a45068928"} Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.673031 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd47f21b-7802-4484-95f4-c7254be818eb","Type":"ContainerDied","Data":"3f11707e9682c69ee48fab869e880a5e17a5a8c9bb65ff17ce44e5e8e3d58940"} Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.673047 4820 scope.go:117] "RemoveContainer" containerID="dfda6fddf537423c0909fbc4548380acf7c73c0ff08d0ca3e73f0b8a45068928" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.673130 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.674658 4820 generic.go:334] "Generic (PLEG): container finished" podID="61fdf904-8a91-45c5-8f1a-0fd56291b77e" containerID="9391c88c3cfcefdabd8aaeea23209970dc3056fa71411160501848a498eb0f0b" exitCode=0 Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.674691 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-s2kcz" event={"ID":"61fdf904-8a91-45c5-8f1a-0fd56291b77e","Type":"ContainerDied","Data":"9391c88c3cfcefdabd8aaeea23209970dc3056fa71411160501848a498eb0f0b"} Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.678154 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-sg5pc" event={"ID":"df439e0e-3443-4c9f-b049-8a36a7e38d86","Type":"ContainerDied","Data":"e3e1bcdfc75a82646e70fe5e1cbd50ff802c30cbe1ed9d4c17b3791c81b7093b"} Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.678196 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-sg5pc" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.678200 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3e1bcdfc75a82646e70fe5e1cbd50ff802c30cbe1ed9d4c17b3791c81b7093b" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.717139 4820 scope.go:117] "RemoveContainer" containerID="bbf38f28c05cd0aed0bc3e09b7ce44f1632acba61681e4fa54ac99b127603b62" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.721343 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd47f21b-7802-4484-95f4-c7254be818eb-combined-ca-bundle\") pod \"dd47f21b-7802-4484-95f4-c7254be818eb\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.721442 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd47f21b-7802-4484-95f4-c7254be818eb-scripts\") pod \"dd47f21b-7802-4484-95f4-c7254be818eb\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.721479 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd47f21b-7802-4484-95f4-c7254be818eb-run-httpd\") pod \"dd47f21b-7802-4484-95f4-c7254be818eb\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.721536 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl7kb\" (UniqueName: \"kubernetes.io/projected/dd47f21b-7802-4484-95f4-c7254be818eb-kube-api-access-rl7kb\") pod \"dd47f21b-7802-4484-95f4-c7254be818eb\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.721620 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd47f21b-7802-4484-95f4-c7254be818eb-config-data\") pod \"dd47f21b-7802-4484-95f4-c7254be818eb\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.721669 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd47f21b-7802-4484-95f4-c7254be818eb-log-httpd\") pod 
\"dd47f21b-7802-4484-95f4-c7254be818eb\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.721738 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd47f21b-7802-4484-95f4-c7254be818eb-sg-core-conf-yaml\") pod \"dd47f21b-7802-4484-95f4-c7254be818eb\" (UID: \"dd47f21b-7802-4484-95f4-c7254be818eb\") " Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.724099 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd47f21b-7802-4484-95f4-c7254be818eb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dd47f21b-7802-4484-95f4-c7254be818eb" (UID: "dd47f21b-7802-4484-95f4-c7254be818eb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.725090 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd47f21b-7802-4484-95f4-c7254be818eb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dd47f21b-7802-4484-95f4-c7254be818eb" (UID: "dd47f21b-7802-4484-95f4-c7254be818eb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.737693 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd47f21b-7802-4484-95f4-c7254be818eb-scripts" (OuterVolumeSpecName: "scripts") pod "dd47f21b-7802-4484-95f4-c7254be818eb" (UID: "dd47f21b-7802-4484-95f4-c7254be818eb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.740720 4820 scope.go:117] "RemoveContainer" containerID="5274c263491d6cfdf5966d5978580d5c1f9491389731753746f387b20ba9ecbf" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.748627 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd47f21b-7802-4484-95f4-c7254be818eb-kube-api-access-rl7kb" (OuterVolumeSpecName: "kube-api-access-rl7kb") pod "dd47f21b-7802-4484-95f4-c7254be818eb" (UID: "dd47f21b-7802-4484-95f4-c7254be818eb"). InnerVolumeSpecName "kube-api-access-rl7kb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.756207 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd47f21b-7802-4484-95f4-c7254be818eb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dd47f21b-7802-4484-95f4-c7254be818eb" (UID: "dd47f21b-7802-4484-95f4-c7254be818eb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.778200 4820 scope.go:117] "RemoveContainer" containerID="87d7f5057ce0aeeb706b500162b1cd4e81c04acfbd29ccb29affa6779e3debd5" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.796706 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd47f21b-7802-4484-95f4-c7254be818eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd47f21b-7802-4484-95f4-c7254be818eb" (UID: "dd47f21b-7802-4484-95f4-c7254be818eb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.796916 4820 scope.go:117] "RemoveContainer" containerID="dfda6fddf537423c0909fbc4548380acf7c73c0ff08d0ca3e73f0b8a45068928" Feb 01 14:39:49 crc kubenswrapper[4820]: E0201 14:39:49.797223 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfda6fddf537423c0909fbc4548380acf7c73c0ff08d0ca3e73f0b8a45068928\": container with ID starting with dfda6fddf537423c0909fbc4548380acf7c73c0ff08d0ca3e73f0b8a45068928 not found: ID does not exist" containerID="dfda6fddf537423c0909fbc4548380acf7c73c0ff08d0ca3e73f0b8a45068928" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.797256 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfda6fddf537423c0909fbc4548380acf7c73c0ff08d0ca3e73f0b8a45068928"} err="failed to get container status \"dfda6fddf537423c0909fbc4548380acf7c73c0ff08d0ca3e73f0b8a45068928\": rpc error: code = NotFound desc = could not find container \"dfda6fddf537423c0909fbc4548380acf7c73c0ff08d0ca3e73f0b8a45068928\": container with ID starting with dfda6fddf537423c0909fbc4548380acf7c73c0ff08d0ca3e73f0b8a45068928 not found: ID does not exist" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.797282 4820 scope.go:117] "RemoveContainer" containerID="bbf38f28c05cd0aed0bc3e09b7ce44f1632acba61681e4fa54ac99b127603b62" Feb 01 14:39:49 crc kubenswrapper[4820]: E0201 14:39:49.797717 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbf38f28c05cd0aed0bc3e09b7ce44f1632acba61681e4fa54ac99b127603b62\": container with ID starting with bbf38f28c05cd0aed0bc3e09b7ce44f1632acba61681e4fa54ac99b127603b62 not found: ID does not exist" containerID="bbf38f28c05cd0aed0bc3e09b7ce44f1632acba61681e4fa54ac99b127603b62" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.797771 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbf38f28c05cd0aed0bc3e09b7ce44f1632acba61681e4fa54ac99b127603b62"} err="failed to get container status \"bbf38f28c05cd0aed0bc3e09b7ce44f1632acba61681e4fa54ac99b127603b62\": rpc error: code = NotFound desc = could not find container \"bbf38f28c05cd0aed0bc3e09b7ce44f1632acba61681e4fa54ac99b127603b62\": container with ID starting with bbf38f28c05cd0aed0bc3e09b7ce44f1632acba61681e4fa54ac99b127603b62 not found: ID does not exist" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.797810 4820 scope.go:117] "RemoveContainer" containerID="5274c263491d6cfdf5966d5978580d5c1f9491389731753746f387b20ba9ecbf" Feb 01 14:39:49 crc kubenswrapper[4820]: E0201 14:39:49.798648 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5274c263491d6cfdf5966d5978580d5c1f9491389731753746f387b20ba9ecbf\": container with ID starting with 5274c263491d6cfdf5966d5978580d5c1f9491389731753746f387b20ba9ecbf not found: ID does not exist" containerID="5274c263491d6cfdf5966d5978580d5c1f9491389731753746f387b20ba9ecbf" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.798679 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5274c263491d6cfdf5966d5978580d5c1f9491389731753746f387b20ba9ecbf"} err="failed to get container status \"5274c263491d6cfdf5966d5978580d5c1f9491389731753746f387b20ba9ecbf\": rpc error: code = NotFound desc = could not 
find container \"5274c263491d6cfdf5966d5978580d5c1f9491389731753746f387b20ba9ecbf\": container with ID starting with 5274c263491d6cfdf5966d5978580d5c1f9491389731753746f387b20ba9ecbf not found: ID does not exist" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.798698 4820 scope.go:117] "RemoveContainer" containerID="87d7f5057ce0aeeb706b500162b1cd4e81c04acfbd29ccb29affa6779e3debd5" Feb 01 14:39:49 crc kubenswrapper[4820]: E0201 14:39:49.799322 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87d7f5057ce0aeeb706b500162b1cd4e81c04acfbd29ccb29affa6779e3debd5\": container with ID starting with 87d7f5057ce0aeeb706b500162b1cd4e81c04acfbd29ccb29affa6779e3debd5 not found: ID does not exist" containerID="87d7f5057ce0aeeb706b500162b1cd4e81c04acfbd29ccb29affa6779e3debd5" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.799348 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87d7f5057ce0aeeb706b500162b1cd4e81c04acfbd29ccb29affa6779e3debd5"} err="failed to get container status \"87d7f5057ce0aeeb706b500162b1cd4e81c04acfbd29ccb29affa6779e3debd5\": rpc error: code = NotFound desc = could not find container \"87d7f5057ce0aeeb706b500162b1cd4e81c04acfbd29ccb29affa6779e3debd5\": container with ID starting with 87d7f5057ce0aeeb706b500162b1cd4e81c04acfbd29ccb29affa6779e3debd5 not found: ID does not exist" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.826153 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd47f21b-7802-4484-95f4-c7254be818eb-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.826182 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd47f21b-7802-4484-95f4-c7254be818eb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.826193 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd47f21b-7802-4484-95f4-c7254be818eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.826202 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd47f21b-7802-4484-95f4-c7254be818eb-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.826210 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd47f21b-7802-4484-95f4-c7254be818eb-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.826219 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl7kb\" (UniqueName: \"kubernetes.io/projected/dd47f21b-7802-4484-95f4-c7254be818eb-kube-api-access-rl7kb\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.836472 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.836523 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.847600 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/dd47f21b-7802-4484-95f4-c7254be818eb-config-data" (OuterVolumeSpecName: "config-data") pod "dd47f21b-7802-4484-95f4-c7254be818eb" (UID: "dd47f21b-7802-4484-95f4-c7254be818eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.854830 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.863447 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.863638 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a951a7fb-4de6-4238-913a-2b072052bd9e" containerName="nova-scheduler-scheduler" containerID="cri-o://9ac52252a056d71be4ee3c80e8c295ae70685a907693b1a4b148e942dd57964e" gracePeriod=30 Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.873887 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 01 14:39:49 crc kubenswrapper[4820]: I0201 14:39:49.928394 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd47f21b-7802-4484-95f4-c7254be818eb-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.007140 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.015632 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.029141 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:39:50 crc kubenswrapper[4820]: E0201 14:39:50.029640 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd47f21b-7802-4484-95f4-c7254be818eb" containerName="ceilometer-central-agent" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.029738 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd47f21b-7802-4484-95f4-c7254be818eb" containerName="ceilometer-central-agent" Feb 01 14:39:50 crc kubenswrapper[4820]: E0201 14:39:50.029797 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd47f21b-7802-4484-95f4-c7254be818eb" containerName="sg-core" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.029849 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd47f21b-7802-4484-95f4-c7254be818eb" containerName="sg-core" Feb 01 14:39:50 crc kubenswrapper[4820]: E0201 14:39:50.029930 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df439e0e-3443-4c9f-b049-8a36a7e38d86" containerName="nova-manage" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.029981 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="df439e0e-3443-4c9f-b049-8a36a7e38d86" containerName="nova-manage" Feb 01 14:39:50 crc kubenswrapper[4820]: E0201 14:39:50.030034 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd47f21b-7802-4484-95f4-c7254be818eb" containerName="proxy-httpd" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.035208 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd47f21b-7802-4484-95f4-c7254be818eb" containerName="proxy-httpd" Feb 01 14:39:50 crc kubenswrapper[4820]: E0201 14:39:50.035381 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd47f21b-7802-4484-95f4-c7254be818eb" 
containerName="ceilometer-notification-agent" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.035444 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd47f21b-7802-4484-95f4-c7254be818eb" containerName="ceilometer-notification-agent" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.035777 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd47f21b-7802-4484-95f4-c7254be818eb" containerName="ceilometer-central-agent" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.035853 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd47f21b-7802-4484-95f4-c7254be818eb" containerName="ceilometer-notification-agent" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.035920 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd47f21b-7802-4484-95f4-c7254be818eb" containerName="sg-core" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.035977 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="df439e0e-3443-4c9f-b049-8a36a7e38d86" containerName="nova-manage" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.036116 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd47f21b-7802-4484-95f4-c7254be818eb" containerName="proxy-httpd" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.037654 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.044389 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.073618 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.073811 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.131259 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36042293-d59a-4aab-b851-e06233b41191-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " pod="openstack/ceilometer-0" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.131363 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36042293-d59a-4aab-b851-e06233b41191-config-data\") pod \"ceilometer-0\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " pod="openstack/ceilometer-0" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.131460 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36042293-d59a-4aab-b851-e06233b41191-log-httpd\") pod \"ceilometer-0\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " pod="openstack/ceilometer-0" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.131578 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36042293-d59a-4aab-b851-e06233b41191-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " pod="openstack/ceilometer-0" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.131627 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36042293-d59a-4aab-b851-e06233b41191-scripts\") pod \"ceilometer-0\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " pod="openstack/ceilometer-0" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.131812 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnh2w\" (UniqueName: \"kubernetes.io/projected/36042293-d59a-4aab-b851-e06233b41191-kube-api-access-dnh2w\") pod \"ceilometer-0\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " pod="openstack/ceilometer-0" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.131913 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36042293-d59a-4aab-b851-e06233b41191-run-httpd\") pod \"ceilometer-0\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " pod="openstack/ceilometer-0" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.156126 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.222127 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-k2nl7"] Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.222354 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" podUID="495c2732-0847-4e56-a609-1a24244a4969" containerName="dnsmasq-dns" containerID="cri-o://cab89cf77f7942aab7c3d7b2fcc3067eb40dfd940b9c01e9d394c5d223cce512" gracePeriod=10 Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.233143 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36042293-d59a-4aab-b851-e06233b41191-log-httpd\") pod \"ceilometer-0\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " pod="openstack/ceilometer-0" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.233809 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36042293-d59a-4aab-b851-e06233b41191-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " pod="openstack/ceilometer-0" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.233826 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36042293-d59a-4aab-b851-e06233b41191-scripts\") pod \"ceilometer-0\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " pod="openstack/ceilometer-0" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.233755 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36042293-d59a-4aab-b851-e06233b41191-log-httpd\") pod \"ceilometer-0\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " pod="openstack/ceilometer-0" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.234544 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnh2w\" (UniqueName: \"kubernetes.io/projected/36042293-d59a-4aab-b851-e06233b41191-kube-api-access-dnh2w\") pod \"ceilometer-0\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " pod="openstack/ceilometer-0" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.234572 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36042293-d59a-4aab-b851-e06233b41191-run-httpd\") pod \"ceilometer-0\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " pod="openstack/ceilometer-0" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.234606 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36042293-d59a-4aab-b851-e06233b41191-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " pod="openstack/ceilometer-0" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.234646 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36042293-d59a-4aab-b851-e06233b41191-config-data\") pod \"ceilometer-0\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " pod="openstack/ceilometer-0" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.235948 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36042293-d59a-4aab-b851-e06233b41191-run-httpd\") pod \"ceilometer-0\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " pod="openstack/ceilometer-0" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.238859 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36042293-d59a-4aab-b851-e06233b41191-scripts\") pod \"ceilometer-0\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " pod="openstack/ceilometer-0" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.241373 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36042293-d59a-4aab-b851-e06233b41191-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " pod="openstack/ceilometer-0" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.242723 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36042293-d59a-4aab-b851-e06233b41191-config-data\") pod \"ceilometer-0\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " pod="openstack/ceilometer-0" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.249587 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36042293-d59a-4aab-b851-e06233b41191-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " pod="openstack/ceilometer-0" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.256523 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnh2w\" (UniqueName: \"kubernetes.io/projected/36042293-d59a-4aab-b851-e06233b41191-kube-api-access-dnh2w\") pod \"ceilometer-0\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " pod="openstack/ceilometer-0" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.387289 4820 util.go:30] "No sandbox for pod can be found. 
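The mount sequence above for the replacement ceilometer-0 pod (UID 36042293-…) follows the volume reconciler's normal order: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded, mirroring the UnmountVolume started / TearDown succeeded / "Volume detached" sequence logged for the old UID. A sketch that groups these messages by UniqueName so each volume's lifecycle reads in order (same format assumptions as before; the regexes only target the message shapes visible in this log):

#!/usr/bin/env python3
# Sketch: reconstruct per-volume lifecycles from kubelet reconciler messages.
import re
from collections import defaultdict

phase_res = {
    "attach-verified": re.compile(r'VerifyControllerAttachedVolume started .*?UniqueName: \\"(\S+?)\\"'),
    "mount-started":   re.compile(r'MountVolume started .*?UniqueName: \\"(\S+?)\\"'),
    "mounted":         re.compile(r'MountVolume\.SetUp succeeded .*?UniqueName: \\"(\S+?)\\"'),
    "unmount-started": re.compile(r'UnmountVolume started .*?UniqueName: \\"(\S+?)\\"'),
    "detached":        re.compile(r'Volume detached .*?UniqueName: \\"(\S+?)\\"'),
}

history = defaultdict(list)
with open("kubelet.log") as f:  # placeholder path for a dump like this one
    for line in f:
        for phase, rx in phase_res.items():
            m = rx.search(line)
            if m:
                history[m.group(1)].append(phase)

for unique_name, seen in sorted(history.items()):
    print(unique_name, "->", " / ".join(seen))

For the new pod UID this should print, per volume, something like "attach-verified / mount-started / mounted", while the old UID's volumes end in "detached".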
Need to start a new one" pod="openstack/ceilometer-0" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.688671 4820 generic.go:334] "Generic (PLEG): container finished" podID="495c2732-0847-4e56-a609-1a24244a4969" containerID="cab89cf77f7942aab7c3d7b2fcc3067eb40dfd940b9c01e9d394c5d223cce512" exitCode=0 Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.688754 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" event={"ID":"495c2732-0847-4e56-a609-1a24244a4969","Type":"ContainerDied","Data":"cab89cf77f7942aab7c3d7b2fcc3067eb40dfd940b9c01e9d394c5d223cce512"} Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.688787 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" event={"ID":"495c2732-0847-4e56-a609-1a24244a4969","Type":"ContainerDied","Data":"f158d262e4cea0bdca725da6e98276bde2f477fcb9a80faa95e073d7c1259a02"} Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.688803 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f158d262e4cea0bdca725da6e98276bde2f477fcb9a80faa95e073d7c1259a02" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.691242 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f0bc185d-fe0d-4270-a531-d60cd7b9ef95" containerName="nova-metadata-log" containerID="cri-o://57b62dadb02e8a5b8b7f27e84a4dc51e75382dc7aaa369c2a783d73b57141a09" gracePeriod=30 Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.691334 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f0bc185d-fe0d-4270-a531-d60cd7b9ef95" containerName="nova-metadata-metadata" containerID="cri-o://d0ded5dddd3a88965590f906d9886ecda00a7735e053afa6c978c28b96b1e3b1" gracePeriod=30 Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.691563 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9ae750ad-27c5-4154-b0b2-ce81412b109a" containerName="nova-api-log" containerID="cri-o://d1bd213fdd87d366f2f0fe56390b363a5c589f8bd4343aed48eaf119a47b2766" gracePeriod=30 Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.691668 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9ae750ad-27c5-4154-b0b2-ce81412b109a" containerName="nova-api-api" containerID="cri-o://4ac51585a221f0f3f34e50d6ba04d50f8f6e5e22d1ab2a4f2e6ecf0c4944abce" gracePeriod=30 Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.700817 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9ae750ad-27c5-4154-b0b2-ce81412b109a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.169:8774/\": EOF" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.701016 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9ae750ad-27c5-4154-b0b2-ce81412b109a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.169:8774/\": EOF" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.734785 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.850091 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/495c2732-0847-4e56-a609-1a24244a4969-config\") pod \"495c2732-0847-4e56-a609-1a24244a4969\" (UID: \"495c2732-0847-4e56-a609-1a24244a4969\") " Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.850171 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/495c2732-0847-4e56-a609-1a24244a4969-ovsdbserver-sb\") pod \"495c2732-0847-4e56-a609-1a24244a4969\" (UID: \"495c2732-0847-4e56-a609-1a24244a4969\") " Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.850249 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt6t5\" (UniqueName: \"kubernetes.io/projected/495c2732-0847-4e56-a609-1a24244a4969-kube-api-access-zt6t5\") pod \"495c2732-0847-4e56-a609-1a24244a4969\" (UID: \"495c2732-0847-4e56-a609-1a24244a4969\") " Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.850326 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/495c2732-0847-4e56-a609-1a24244a4969-ovsdbserver-nb\") pod \"495c2732-0847-4e56-a609-1a24244a4969\" (UID: \"495c2732-0847-4e56-a609-1a24244a4969\") " Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.850439 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/495c2732-0847-4e56-a609-1a24244a4969-dns-svc\") pod \"495c2732-0847-4e56-a609-1a24244a4969\" (UID: \"495c2732-0847-4e56-a609-1a24244a4969\") " Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.854805 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/495c2732-0847-4e56-a609-1a24244a4969-kube-api-access-zt6t5" (OuterVolumeSpecName: "kube-api-access-zt6t5") pod "495c2732-0847-4e56-a609-1a24244a4969" (UID: "495c2732-0847-4e56-a609-1a24244a4969"). InnerVolumeSpecName "kube-api-access-zt6t5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.909013 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/495c2732-0847-4e56-a609-1a24244a4969-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "495c2732-0847-4e56-a609-1a24244a4969" (UID: "495c2732-0847-4e56-a609-1a24244a4969"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:39:50 crc kubenswrapper[4820]: W0201 14:39:50.918397 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36042293_d59a_4aab_b851_e06233b41191.slice/crio-faf5197cac02cb6f2f7706a1184a08de7bc1a2d9c5c12be4ec4fd79a277c661d WatchSource:0}: Error finding container faf5197cac02cb6f2f7706a1184a08de7bc1a2d9c5c12be4ec4fd79a277c661d: Status 404 returned error can't find the container with id faf5197cac02cb6f2f7706a1184a08de7bc1a2d9c5c12be4ec4fd79a277c661d Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.923184 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.927650 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/495c2732-0847-4e56-a609-1a24244a4969-config" (OuterVolumeSpecName: "config") pod "495c2732-0847-4e56-a609-1a24244a4969" (UID: "495c2732-0847-4e56-a609-1a24244a4969"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.928037 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/495c2732-0847-4e56-a609-1a24244a4969-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "495c2732-0847-4e56-a609-1a24244a4969" (UID: "495c2732-0847-4e56-a609-1a24244a4969"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.932471 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/495c2732-0847-4e56-a609-1a24244a4969-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "495c2732-0847-4e56-a609-1a24244a4969" (UID: "495c2732-0847-4e56-a609-1a24244a4969"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.953638 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt6t5\" (UniqueName: \"kubernetes.io/projected/495c2732-0847-4e56-a609-1a24244a4969-kube-api-access-zt6t5\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.953668 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/495c2732-0847-4e56-a609-1a24244a4969-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.953679 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/495c2732-0847-4e56-a609-1a24244a4969-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.953688 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/495c2732-0847-4e56-a609-1a24244a4969-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:50 crc kubenswrapper[4820]: I0201 14:39:50.953697 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/495c2732-0847-4e56-a609-1a24244a4969-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.041642 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-s2kcz" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.159777 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61fdf904-8a91-45c5-8f1a-0fd56291b77e-scripts\") pod \"61fdf904-8a91-45c5-8f1a-0fd56291b77e\" (UID: \"61fdf904-8a91-45c5-8f1a-0fd56291b77e\") " Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.159902 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61fdf904-8a91-45c5-8f1a-0fd56291b77e-combined-ca-bundle\") pod \"61fdf904-8a91-45c5-8f1a-0fd56291b77e\" (UID: \"61fdf904-8a91-45c5-8f1a-0fd56291b77e\") " Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.160023 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jkcc\" (UniqueName: \"kubernetes.io/projected/61fdf904-8a91-45c5-8f1a-0fd56291b77e-kube-api-access-4jkcc\") pod \"61fdf904-8a91-45c5-8f1a-0fd56291b77e\" (UID: \"61fdf904-8a91-45c5-8f1a-0fd56291b77e\") " Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.160117 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61fdf904-8a91-45c5-8f1a-0fd56291b77e-config-data\") pod \"61fdf904-8a91-45c5-8f1a-0fd56291b77e\" (UID: \"61fdf904-8a91-45c5-8f1a-0fd56291b77e\") " Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.167539 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61fdf904-8a91-45c5-8f1a-0fd56291b77e-scripts" (OuterVolumeSpecName: "scripts") pod "61fdf904-8a91-45c5-8f1a-0fd56291b77e" (UID: "61fdf904-8a91-45c5-8f1a-0fd56291b77e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.175048 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61fdf904-8a91-45c5-8f1a-0fd56291b77e-kube-api-access-4jkcc" (OuterVolumeSpecName: "kube-api-access-4jkcc") pod "61fdf904-8a91-45c5-8f1a-0fd56291b77e" (UID: "61fdf904-8a91-45c5-8f1a-0fd56291b77e"). InnerVolumeSpecName "kube-api-access-4jkcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.189978 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61fdf904-8a91-45c5-8f1a-0fd56291b77e-config-data" (OuterVolumeSpecName: "config-data") pod "61fdf904-8a91-45c5-8f1a-0fd56291b77e" (UID: "61fdf904-8a91-45c5-8f1a-0fd56291b77e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.192994 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61fdf904-8a91-45c5-8f1a-0fd56291b77e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61fdf904-8a91-45c5-8f1a-0fd56291b77e" (UID: "61fdf904-8a91-45c5-8f1a-0fd56291b77e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.215906 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd47f21b-7802-4484-95f4-c7254be818eb" path="/var/lib/kubelet/pods/dd47f21b-7802-4484-95f4-c7254be818eb/volumes" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.227813 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.261988 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jkcc\" (UniqueName: \"kubernetes.io/projected/61fdf904-8a91-45c5-8f1a-0fd56291b77e-kube-api-access-4jkcc\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.262022 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61fdf904-8a91-45c5-8f1a-0fd56291b77e-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.262031 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61fdf904-8a91-45c5-8f1a-0fd56291b77e-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.262041 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61fdf904-8a91-45c5-8f1a-0fd56291b77e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.363244 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-logs\") pod \"f0bc185d-fe0d-4270-a531-d60cd7b9ef95\" (UID: \"f0bc185d-fe0d-4270-a531-d60cd7b9ef95\") " Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.363516 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-logs" (OuterVolumeSpecName: "logs") pod "f0bc185d-fe0d-4270-a531-d60cd7b9ef95" (UID: "f0bc185d-fe0d-4270-a531-d60cd7b9ef95"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.363811 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-config-data\") pod \"f0bc185d-fe0d-4270-a531-d60cd7b9ef95\" (UID: \"f0bc185d-fe0d-4270-a531-d60cd7b9ef95\") " Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.363912 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhr7l\" (UniqueName: \"kubernetes.io/projected/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-kube-api-access-hhr7l\") pod \"f0bc185d-fe0d-4270-a531-d60cd7b9ef95\" (UID: \"f0bc185d-fe0d-4270-a531-d60cd7b9ef95\") " Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.364024 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-nova-metadata-tls-certs\") pod \"f0bc185d-fe0d-4270-a531-d60cd7b9ef95\" (UID: \"f0bc185d-fe0d-4270-a531-d60cd7b9ef95\") " Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.364565 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-combined-ca-bundle\") pod \"f0bc185d-fe0d-4270-a531-d60cd7b9ef95\" (UID: \"f0bc185d-fe0d-4270-a531-d60cd7b9ef95\") " Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.365467 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-logs\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.368981 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-kube-api-access-hhr7l" (OuterVolumeSpecName: "kube-api-access-hhr7l") pod "f0bc185d-fe0d-4270-a531-d60cd7b9ef95" (UID: "f0bc185d-fe0d-4270-a531-d60cd7b9ef95"). InnerVolumeSpecName "kube-api-access-hhr7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.418588 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f0bc185d-fe0d-4270-a531-d60cd7b9ef95" (UID: "f0bc185d-fe0d-4270-a531-d60cd7b9ef95"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.427022 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0bc185d-fe0d-4270-a531-d60cd7b9ef95" (UID: "f0bc185d-fe0d-4270-a531-d60cd7b9ef95"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.431606 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-config-data" (OuterVolumeSpecName: "config-data") pod "f0bc185d-fe0d-4270-a531-d60cd7b9ef95" (UID: "f0bc185d-fe0d-4270-a531-d60cd7b9ef95"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.467650 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.467694 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhr7l\" (UniqueName: \"kubernetes.io/projected/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-kube-api-access-hhr7l\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.467708 4820 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.467718 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0bc185d-fe0d-4270-a531-d60cd7b9ef95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.700820 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36042293-d59a-4aab-b851-e06233b41191","Type":"ContainerStarted","Data":"faf5197cac02cb6f2f7706a1184a08de7bc1a2d9c5c12be4ec4fd79a277c661d"} Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.703528 4820 generic.go:334] "Generic (PLEG): container finished" podID="9ae750ad-27c5-4154-b0b2-ce81412b109a" containerID="d1bd213fdd87d366f2f0fe56390b363a5c589f8bd4343aed48eaf119a47b2766" exitCode=143 Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.703585 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ae750ad-27c5-4154-b0b2-ce81412b109a","Type":"ContainerDied","Data":"d1bd213fdd87d366f2f0fe56390b363a5c589f8bd4343aed48eaf119a47b2766"} Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.705011 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-s2kcz" event={"ID":"61fdf904-8a91-45c5-8f1a-0fd56291b77e","Type":"ContainerDied","Data":"a9a9d89782991f5c298ca825d1e900001d880dd13dad4af7ec5eca3f6d5fa5dd"} Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.705045 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9a9d89782991f5c298ca825d1e900001d880dd13dad4af7ec5eca3f6d5fa5dd" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.705118 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-s2kcz" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.710693 4820 generic.go:334] "Generic (PLEG): container finished" podID="a951a7fb-4de6-4238-913a-2b072052bd9e" containerID="9ac52252a056d71be4ee3c80e8c295ae70685a907693b1a4b148e942dd57964e" exitCode=0 Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.710744 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a951a7fb-4de6-4238-913a-2b072052bd9e","Type":"ContainerDied","Data":"9ac52252a056d71be4ee3c80e8c295ae70685a907693b1a4b148e942dd57964e"} Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.714065 4820 generic.go:334] "Generic (PLEG): container finished" podID="f0bc185d-fe0d-4270-a531-d60cd7b9ef95" containerID="d0ded5dddd3a88965590f906d9886ecda00a7735e053afa6c978c28b96b1e3b1" exitCode=0 Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.714092 4820 generic.go:334] "Generic (PLEG): container finished" podID="f0bc185d-fe0d-4270-a531-d60cd7b9ef95" containerID="57b62dadb02e8a5b8b7f27e84a4dc51e75382dc7aaa369c2a783d73b57141a09" exitCode=143 Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.714155 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-k2nl7" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.714549 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0bc185d-fe0d-4270-a531-d60cd7b9ef95","Type":"ContainerDied","Data":"d0ded5dddd3a88965590f906d9886ecda00a7735e053afa6c978c28b96b1e3b1"} Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.714587 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0bc185d-fe0d-4270-a531-d60cd7b9ef95","Type":"ContainerDied","Data":"57b62dadb02e8a5b8b7f27e84a4dc51e75382dc7aaa369c2a783d73b57141a09"} Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.714597 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0bc185d-fe0d-4270-a531-d60cd7b9ef95","Type":"ContainerDied","Data":"cb60efa2309febd06eb949d2505e7d82cc2d2c4ef5063740c8a243911988b6c0"} Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.714615 4820 scope.go:117] "RemoveContainer" containerID="d0ded5dddd3a88965590f906d9886ecda00a7735e053afa6c978c28b96b1e3b1" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.714615 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.745932 4820 scope.go:117] "RemoveContainer" containerID="57b62dadb02e8a5b8b7f27e84a4dc51e75382dc7aaa369c2a783d73b57141a09" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.775086 4820 scope.go:117] "RemoveContainer" containerID="d0ded5dddd3a88965590f906d9886ecda00a7735e053afa6c978c28b96b1e3b1" Feb 01 14:39:51 crc kubenswrapper[4820]: E0201 14:39:51.779861 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0ded5dddd3a88965590f906d9886ecda00a7735e053afa6c978c28b96b1e3b1\": container with ID starting with d0ded5dddd3a88965590f906d9886ecda00a7735e053afa6c978c28b96b1e3b1 not found: ID does not exist" containerID="d0ded5dddd3a88965590f906d9886ecda00a7735e053afa6c978c28b96b1e3b1" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.779927 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0ded5dddd3a88965590f906d9886ecda00a7735e053afa6c978c28b96b1e3b1"} err="failed to get container status \"d0ded5dddd3a88965590f906d9886ecda00a7735e053afa6c978c28b96b1e3b1\": rpc error: code = NotFound desc = could not find container \"d0ded5dddd3a88965590f906d9886ecda00a7735e053afa6c978c28b96b1e3b1\": container with ID starting with d0ded5dddd3a88965590f906d9886ecda00a7735e053afa6c978c28b96b1e3b1 not found: ID does not exist" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.779959 4820 scope.go:117] "RemoveContainer" containerID="57b62dadb02e8a5b8b7f27e84a4dc51e75382dc7aaa369c2a783d73b57141a09" Feb 01 14:39:51 crc kubenswrapper[4820]: E0201 14:39:51.782547 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57b62dadb02e8a5b8b7f27e84a4dc51e75382dc7aaa369c2a783d73b57141a09\": container with ID starting with 57b62dadb02e8a5b8b7f27e84a4dc51e75382dc7aaa369c2a783d73b57141a09 not found: ID does not exist" containerID="57b62dadb02e8a5b8b7f27e84a4dc51e75382dc7aaa369c2a783d73b57141a09" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.782582 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57b62dadb02e8a5b8b7f27e84a4dc51e75382dc7aaa369c2a783d73b57141a09"} err="failed to get container status \"57b62dadb02e8a5b8b7f27e84a4dc51e75382dc7aaa369c2a783d73b57141a09\": rpc error: code = NotFound desc = could not find container \"57b62dadb02e8a5b8b7f27e84a4dc51e75382dc7aaa369c2a783d73b57141a09\": container with ID starting with 57b62dadb02e8a5b8b7f27e84a4dc51e75382dc7aaa369c2a783d73b57141a09 not found: ID does not exist" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.782630 4820 scope.go:117] "RemoveContainer" containerID="d0ded5dddd3a88965590f906d9886ecda00a7735e053afa6c978c28b96b1e3b1" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.784544 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0ded5dddd3a88965590f906d9886ecda00a7735e053afa6c978c28b96b1e3b1"} err="failed to get container status \"d0ded5dddd3a88965590f906d9886ecda00a7735e053afa6c978c28b96b1e3b1\": rpc error: code = NotFound desc = could not find container \"d0ded5dddd3a88965590f906d9886ecda00a7735e053afa6c978c28b96b1e3b1\": container with ID starting with d0ded5dddd3a88965590f906d9886ecda00a7735e053afa6c978c28b96b1e3b1 not found: ID does not exist" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.784586 4820 
scope.go:117] "RemoveContainer" containerID="57b62dadb02e8a5b8b7f27e84a4dc51e75382dc7aaa369c2a783d73b57141a09" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.786158 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.786594 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57b62dadb02e8a5b8b7f27e84a4dc51e75382dc7aaa369c2a783d73b57141a09"} err="failed to get container status \"57b62dadb02e8a5b8b7f27e84a4dc51e75382dc7aaa369c2a783d73b57141a09\": rpc error: code = NotFound desc = could not find container \"57b62dadb02e8a5b8b7f27e84a4dc51e75382dc7aaa369c2a783d73b57141a09\": container with ID starting with 57b62dadb02e8a5b8b7f27e84a4dc51e75382dc7aaa369c2a783d73b57141a09 not found: ID does not exist" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.793424 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-k2nl7"] Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.831657 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-k2nl7"] Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.840173 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.844658 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.855908 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 01 14:39:51 crc kubenswrapper[4820]: E0201 14:39:51.856665 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0bc185d-fe0d-4270-a531-d60cd7b9ef95" containerName="nova-metadata-log" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.856685 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0bc185d-fe0d-4270-a531-d60cd7b9ef95" containerName="nova-metadata-log" Feb 01 14:39:51 crc kubenswrapper[4820]: E0201 14:39:51.856709 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a951a7fb-4de6-4238-913a-2b072052bd9e" containerName="nova-scheduler-scheduler" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.856719 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a951a7fb-4de6-4238-913a-2b072052bd9e" containerName="nova-scheduler-scheduler" Feb 01 14:39:51 crc kubenswrapper[4820]: E0201 14:39:51.856739 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0bc185d-fe0d-4270-a531-d60cd7b9ef95" containerName="nova-metadata-metadata" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.856746 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0bc185d-fe0d-4270-a531-d60cd7b9ef95" containerName="nova-metadata-metadata" Feb 01 14:39:51 crc kubenswrapper[4820]: E0201 14:39:51.856759 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="495c2732-0847-4e56-a609-1a24244a4969" containerName="dnsmasq-dns" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.856765 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="495c2732-0847-4e56-a609-1a24244a4969" containerName="dnsmasq-dns" Feb 01 14:39:51 crc kubenswrapper[4820]: E0201 14:39:51.856773 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="495c2732-0847-4e56-a609-1a24244a4969" containerName="init" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.856778 4820 
state_mem.go:107] "Deleted CPUSet assignment" podUID="495c2732-0847-4e56-a609-1a24244a4969" containerName="init" Feb 01 14:39:51 crc kubenswrapper[4820]: E0201 14:39:51.856786 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61fdf904-8a91-45c5-8f1a-0fd56291b77e" containerName="nova-cell1-conductor-db-sync" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.856793 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="61fdf904-8a91-45c5-8f1a-0fd56291b77e" containerName="nova-cell1-conductor-db-sync" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.857009 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0bc185d-fe0d-4270-a531-d60cd7b9ef95" containerName="nova-metadata-log" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.857021 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="61fdf904-8a91-45c5-8f1a-0fd56291b77e" containerName="nova-cell1-conductor-db-sync" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.857032 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0bc185d-fe0d-4270-a531-d60cd7b9ef95" containerName="nova-metadata-metadata" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.857050 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a951a7fb-4de6-4238-913a-2b072052bd9e" containerName="nova-scheduler-scheduler" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.857060 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="495c2732-0847-4e56-a609-1a24244a4969" containerName="dnsmasq-dns" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.857766 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.861706 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.870629 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.872460 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.874778 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.877368 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a951a7fb-4de6-4238-913a-2b072052bd9e-combined-ca-bundle\") pod \"a951a7fb-4de6-4238-913a-2b072052bd9e\" (UID: \"a951a7fb-4de6-4238-913a-2b072052bd9e\") " Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.877497 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a951a7fb-4de6-4238-913a-2b072052bd9e-config-data\") pod \"a951a7fb-4de6-4238-913a-2b072052bd9e\" (UID: \"a951a7fb-4de6-4238-913a-2b072052bd9e\") " Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.877581 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztbmc\" (UniqueName: \"kubernetes.io/projected/a951a7fb-4de6-4238-913a-2b072052bd9e-kube-api-access-ztbmc\") pod \"a951a7fb-4de6-4238-913a-2b072052bd9e\" (UID: \"a951a7fb-4de6-4238-913a-2b072052bd9e\") " Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.879561 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.883527 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.887628 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a951a7fb-4de6-4238-913a-2b072052bd9e-kube-api-access-ztbmc" (OuterVolumeSpecName: "kube-api-access-ztbmc") pod "a951a7fb-4de6-4238-913a-2b072052bd9e" (UID: "a951a7fb-4de6-4238-913a-2b072052bd9e"). InnerVolumeSpecName "kube-api-access-ztbmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.905759 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.918967 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a951a7fb-4de6-4238-913a-2b072052bd9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a951a7fb-4de6-4238-913a-2b072052bd9e" (UID: "a951a7fb-4de6-4238-913a-2b072052bd9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.921199 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a951a7fb-4de6-4238-913a-2b072052bd9e-config-data" (OuterVolumeSpecName: "config-data") pod "a951a7fb-4de6-4238-913a-2b072052bd9e" (UID: "a951a7fb-4de6-4238-913a-2b072052bd9e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.979763 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae340692-583b-402a-8638-a9d9d9442c08-logs\") pod \"nova-metadata-0\" (UID: \"ae340692-583b-402a-8638-a9d9d9442c08\") " pod="openstack/nova-metadata-0" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.979814 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqf7j\" (UniqueName: \"kubernetes.io/projected/1f6d787d-a27d-4f53-aa4c-794b09283f9e-kube-api-access-wqf7j\") pod \"nova-cell1-conductor-0\" (UID: \"1f6d787d-a27d-4f53-aa4c-794b09283f9e\") " pod="openstack/nova-cell1-conductor-0" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.979860 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f6d787d-a27d-4f53-aa4c-794b09283f9e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1f6d787d-a27d-4f53-aa4c-794b09283f9e\") " pod="openstack/nova-cell1-conductor-0" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.980027 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vgvr\" (UniqueName: \"kubernetes.io/projected/ae340692-583b-402a-8638-a9d9d9442c08-kube-api-access-2vgvr\") pod \"nova-metadata-0\" (UID: \"ae340692-583b-402a-8638-a9d9d9442c08\") " pod="openstack/nova-metadata-0" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.980193 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae340692-583b-402a-8638-a9d9d9442c08-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ae340692-583b-402a-8638-a9d9d9442c08\") " pod="openstack/nova-metadata-0" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.980335 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6d787d-a27d-4f53-aa4c-794b09283f9e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1f6d787d-a27d-4f53-aa4c-794b09283f9e\") " pod="openstack/nova-cell1-conductor-0" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.980437 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae340692-583b-402a-8638-a9d9d9442c08-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ae340692-583b-402a-8638-a9d9d9442c08\") " pod="openstack/nova-metadata-0" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.980501 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae340692-583b-402a-8638-a9d9d9442c08-config-data\") pod \"nova-metadata-0\" (UID: \"ae340692-583b-402a-8638-a9d9d9442c08\") " pod="openstack/nova-metadata-0" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.980614 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztbmc\" (UniqueName: \"kubernetes.io/projected/a951a7fb-4de6-4238-913a-2b072052bd9e-kube-api-access-ztbmc\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.980636 4820 reconciler_common.go:293] "Volume 
detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a951a7fb-4de6-4238-913a-2b072052bd9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:51 crc kubenswrapper[4820]: I0201 14:39:51.980646 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a951a7fb-4de6-4238-913a-2b072052bd9e-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.082340 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f6d787d-a27d-4f53-aa4c-794b09283f9e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1f6d787d-a27d-4f53-aa4c-794b09283f9e\") " pod="openstack/nova-cell1-conductor-0" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.082410 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vgvr\" (UniqueName: \"kubernetes.io/projected/ae340692-583b-402a-8638-a9d9d9442c08-kube-api-access-2vgvr\") pod \"nova-metadata-0\" (UID: \"ae340692-583b-402a-8638-a9d9d9442c08\") " pod="openstack/nova-metadata-0" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.082466 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae340692-583b-402a-8638-a9d9d9442c08-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ae340692-583b-402a-8638-a9d9d9442c08\") " pod="openstack/nova-metadata-0" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.082519 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6d787d-a27d-4f53-aa4c-794b09283f9e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1f6d787d-a27d-4f53-aa4c-794b09283f9e\") " pod="openstack/nova-cell1-conductor-0" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.082562 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae340692-583b-402a-8638-a9d9d9442c08-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ae340692-583b-402a-8638-a9d9d9442c08\") " pod="openstack/nova-metadata-0" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.082604 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae340692-583b-402a-8638-a9d9d9442c08-config-data\") pod \"nova-metadata-0\" (UID: \"ae340692-583b-402a-8638-a9d9d9442c08\") " pod="openstack/nova-metadata-0" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.082683 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae340692-583b-402a-8638-a9d9d9442c08-logs\") pod \"nova-metadata-0\" (UID: \"ae340692-583b-402a-8638-a9d9d9442c08\") " pod="openstack/nova-metadata-0" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.082715 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqf7j\" (UniqueName: \"kubernetes.io/projected/1f6d787d-a27d-4f53-aa4c-794b09283f9e-kube-api-access-wqf7j\") pod \"nova-cell1-conductor-0\" (UID: \"1f6d787d-a27d-4f53-aa4c-794b09283f9e\") " pod="openstack/nova-cell1-conductor-0" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.083260 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ae340692-583b-402a-8638-a9d9d9442c08-logs\") pod \"nova-metadata-0\" (UID: \"ae340692-583b-402a-8638-a9d9d9442c08\") " pod="openstack/nova-metadata-0" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.086718 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f6d787d-a27d-4f53-aa4c-794b09283f9e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1f6d787d-a27d-4f53-aa4c-794b09283f9e\") " pod="openstack/nova-cell1-conductor-0" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.086718 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae340692-583b-402a-8638-a9d9d9442c08-config-data\") pod \"nova-metadata-0\" (UID: \"ae340692-583b-402a-8638-a9d9d9442c08\") " pod="openstack/nova-metadata-0" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.087192 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae340692-583b-402a-8638-a9d9d9442c08-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ae340692-583b-402a-8638-a9d9d9442c08\") " pod="openstack/nova-metadata-0" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.087487 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae340692-583b-402a-8638-a9d9d9442c08-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ae340692-583b-402a-8638-a9d9d9442c08\") " pod="openstack/nova-metadata-0" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.092917 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6d787d-a27d-4f53-aa4c-794b09283f9e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1f6d787d-a27d-4f53-aa4c-794b09283f9e\") " pod="openstack/nova-cell1-conductor-0" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.101255 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqf7j\" (UniqueName: \"kubernetes.io/projected/1f6d787d-a27d-4f53-aa4c-794b09283f9e-kube-api-access-wqf7j\") pod \"nova-cell1-conductor-0\" (UID: \"1f6d787d-a27d-4f53-aa4c-794b09283f9e\") " pod="openstack/nova-cell1-conductor-0" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.110255 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vgvr\" (UniqueName: \"kubernetes.io/projected/ae340692-583b-402a-8638-a9d9d9442c08-kube-api-access-2vgvr\") pod \"nova-metadata-0\" (UID: \"ae340692-583b-402a-8638-a9d9d9442c08\") " pod="openstack/nova-metadata-0" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.190623 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.202531 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.695644 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 01 14:39:52 crc kubenswrapper[4820]: W0201 14:39:52.701646 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f6d787d_a27d_4f53_aa4c_794b09283f9e.slice/crio-72277e3a8efe632e9ad5b94e6cad1ad15e897627077bb766836c6333add0d500 WatchSource:0}: Error finding container 72277e3a8efe632e9ad5b94e6cad1ad15e897627077bb766836c6333add0d500: Status 404 returned error can't find the container with id 72277e3a8efe632e9ad5b94e6cad1ad15e897627077bb766836c6333add0d500 Feb 01 14:39:52 crc kubenswrapper[4820]: W0201 14:39:52.703112 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae340692_583b_402a_8638_a9d9d9442c08.slice/crio-dc6209d89916ae974223d9762d565dbddc04196ccbffa4293552161305b81a1f WatchSource:0}: Error finding container dc6209d89916ae974223d9762d565dbddc04196ccbffa4293552161305b81a1f: Status 404 returned error can't find the container with id dc6209d89916ae974223d9762d565dbddc04196ccbffa4293552161305b81a1f Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.705158 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.724372 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ae340692-583b-402a-8638-a9d9d9442c08","Type":"ContainerStarted","Data":"dc6209d89916ae974223d9762d565dbddc04196ccbffa4293552161305b81a1f"} Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.726054 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a951a7fb-4de6-4238-913a-2b072052bd9e","Type":"ContainerDied","Data":"63708ee9eef9bc94b041cf70526ce066c79573215bafe7945f14dd62154808ea"} Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.726085 4820 scope.go:117] "RemoveContainer" containerID="9ac52252a056d71be4ee3c80e8c295ae70685a907693b1a4b148e942dd57964e" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.726182 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.733369 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36042293-d59a-4aab-b851-e06233b41191","Type":"ContainerStarted","Data":"d7b807f29b9b9d4dd6290ad6658ff984be0c7f39da1b0d65122af5f40804471a"} Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.736309 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1f6d787d-a27d-4f53-aa4c-794b09283f9e","Type":"ContainerStarted","Data":"72277e3a8efe632e9ad5b94e6cad1ad15e897627077bb766836c6333add0d500"} Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.784045 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.800981 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.809553 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.810847 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.812603 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.820834 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.895033 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4102147a-54c1-4f63-8f88-92e634a6db94-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4102147a-54c1-4f63-8f88-92e634a6db94\") " pod="openstack/nova-scheduler-0" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.895122 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrd55\" (UniqueName: \"kubernetes.io/projected/4102147a-54c1-4f63-8f88-92e634a6db94-kube-api-access-jrd55\") pod \"nova-scheduler-0\" (UID: \"4102147a-54c1-4f63-8f88-92e634a6db94\") " pod="openstack/nova-scheduler-0" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.895228 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4102147a-54c1-4f63-8f88-92e634a6db94-config-data\") pod \"nova-scheduler-0\" (UID: \"4102147a-54c1-4f63-8f88-92e634a6db94\") " pod="openstack/nova-scheduler-0" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.996552 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4102147a-54c1-4f63-8f88-92e634a6db94-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4102147a-54c1-4f63-8f88-92e634a6db94\") " pod="openstack/nova-scheduler-0" Feb 01 14:39:52 crc kubenswrapper[4820]: I0201 14:39:52.996633 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrd55\" (UniqueName: \"kubernetes.io/projected/4102147a-54c1-4f63-8f88-92e634a6db94-kube-api-access-jrd55\") pod \"nova-scheduler-0\" (UID: \"4102147a-54c1-4f63-8f88-92e634a6db94\") " pod="openstack/nova-scheduler-0" Feb 01 14:39:52 crc 
kubenswrapper[4820]: I0201 14:39:52.996655 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4102147a-54c1-4f63-8f88-92e634a6db94-config-data\") pod \"nova-scheduler-0\" (UID: \"4102147a-54c1-4f63-8f88-92e634a6db94\") " pod="openstack/nova-scheduler-0" Feb 01 14:39:53 crc kubenswrapper[4820]: I0201 14:39:53.002574 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4102147a-54c1-4f63-8f88-92e634a6db94-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4102147a-54c1-4f63-8f88-92e634a6db94\") " pod="openstack/nova-scheduler-0" Feb 01 14:39:53 crc kubenswrapper[4820]: I0201 14:39:53.002690 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4102147a-54c1-4f63-8f88-92e634a6db94-config-data\") pod \"nova-scheduler-0\" (UID: \"4102147a-54c1-4f63-8f88-92e634a6db94\") " pod="openstack/nova-scheduler-0" Feb 01 14:39:53 crc kubenswrapper[4820]: I0201 14:39:53.013652 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrd55\" (UniqueName: \"kubernetes.io/projected/4102147a-54c1-4f63-8f88-92e634a6db94-kube-api-access-jrd55\") pod \"nova-scheduler-0\" (UID: \"4102147a-54c1-4f63-8f88-92e634a6db94\") " pod="openstack/nova-scheduler-0" Feb 01 14:39:53 crc kubenswrapper[4820]: I0201 14:39:53.208570 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="495c2732-0847-4e56-a609-1a24244a4969" path="/var/lib/kubelet/pods/495c2732-0847-4e56-a609-1a24244a4969/volumes" Feb 01 14:39:53 crc kubenswrapper[4820]: I0201 14:39:53.209336 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a951a7fb-4de6-4238-913a-2b072052bd9e" path="/var/lib/kubelet/pods/a951a7fb-4de6-4238-913a-2b072052bd9e/volumes" Feb 01 14:39:53 crc kubenswrapper[4820]: I0201 14:39:53.210010 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0bc185d-fe0d-4270-a531-d60cd7b9ef95" path="/var/lib/kubelet/pods/f0bc185d-fe0d-4270-a531-d60cd7b9ef95/volumes" Feb 01 14:39:53 crc kubenswrapper[4820]: I0201 14:39:53.279433 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 01 14:39:53 crc kubenswrapper[4820]: W0201 14:39:53.710520 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4102147a_54c1_4f63_8f88_92e634a6db94.slice/crio-8501834a38075b2aaefb385f30241c3e9583eeaa70397b535e1049846274a48c WatchSource:0}: Error finding container 8501834a38075b2aaefb385f30241c3e9583eeaa70397b535e1049846274a48c: Status 404 returned error can't find the container with id 8501834a38075b2aaefb385f30241c3e9583eeaa70397b535e1049846274a48c Feb 01 14:39:53 crc kubenswrapper[4820]: I0201 14:39:53.718055 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 01 14:39:53 crc kubenswrapper[4820]: I0201 14:39:53.750515 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1f6d787d-a27d-4f53-aa4c-794b09283f9e","Type":"ContainerStarted","Data":"7f59e4a5d277d08bddbeaa514d6193b4f4e5e150992f5318da5225771f732081"} Feb 01 14:39:53 crc kubenswrapper[4820]: I0201 14:39:53.752213 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 01 14:39:53 crc kubenswrapper[4820]: I0201 14:39:53.757225 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4102147a-54c1-4f63-8f88-92e634a6db94","Type":"ContainerStarted","Data":"8501834a38075b2aaefb385f30241c3e9583eeaa70397b535e1049846274a48c"} Feb 01 14:39:53 crc kubenswrapper[4820]: I0201 14:39:53.760108 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ae340692-583b-402a-8638-a9d9d9442c08","Type":"ContainerStarted","Data":"be62e9dfc6b08cddab07e43fd5275fc416cebb1ce0a01922d8c35357dc903a3b"} Feb 01 14:39:53 crc kubenswrapper[4820]: I0201 14:39:53.760207 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ae340692-583b-402a-8638-a9d9d9442c08","Type":"ContainerStarted","Data":"f1191aa60657e9f0c8ee307db774f0f0363c0c81819461406081c737a591c58c"} Feb 01 14:39:53 crc kubenswrapper[4820]: I0201 14:39:53.763718 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36042293-d59a-4aab-b851-e06233b41191","Type":"ContainerStarted","Data":"29e9754534bbaaa830648fdb4b48ee3d38e15f4be76356c4e1559451d414159b"} Feb 01 14:39:53 crc kubenswrapper[4820]: I0201 14:39:53.763789 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36042293-d59a-4aab-b851-e06233b41191","Type":"ContainerStarted","Data":"ca1105ff3ced59cc2514e0ba69b2cb47dc78a94c525fc89dd4ed9d0a43de6758"} Feb 01 14:39:53 crc kubenswrapper[4820]: I0201 14:39:53.772609 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.772570939 podStartE2EDuration="2.772570939s" podCreationTimestamp="2026-02-01 14:39:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:39:53.768774016 +0000 UTC m=+1135.289140320" watchObservedRunningTime="2026-02-01 14:39:53.772570939 +0000 UTC m=+1135.292937213" Feb 01 14:39:53 crc kubenswrapper[4820]: I0201 14:39:53.796631 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.796609567 podStartE2EDuration="2.796609567s" 
podCreationTimestamp="2026-02-01 14:39:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:39:53.7832433 +0000 UTC m=+1135.303609584" watchObservedRunningTime="2026-02-01 14:39:53.796609567 +0000 UTC m=+1135.316975851" Feb 01 14:39:54 crc kubenswrapper[4820]: I0201 14:39:54.777847 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4102147a-54c1-4f63-8f88-92e634a6db94","Type":"ContainerStarted","Data":"a6c6a76668d869fdf947866aeda672aa3e2334b0330360ddbe7c881662b21a37"} Feb 01 14:39:54 crc kubenswrapper[4820]: I0201 14:39:54.806440 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.806420057 podStartE2EDuration="2.806420057s" podCreationTimestamp="2026-02-01 14:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:39:54.799568339 +0000 UTC m=+1136.319934623" watchObservedRunningTime="2026-02-01 14:39:54.806420057 +0000 UTC m=+1136.326786341" Feb 01 14:39:55 crc kubenswrapper[4820]: E0201 14:39:55.066644 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8255e6c_59ea_4449_bcec_264a12bf6d6e.slice/crio-76936d492ceac0fbfeeede8ea87447c86774da965e25e10dd874d13d7252157c\": RecentStats: unable to find data in memory cache]" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.536021 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.555769 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqx8c\" (UniqueName: \"kubernetes.io/projected/9ae750ad-27c5-4154-b0b2-ce81412b109a-kube-api-access-gqx8c\") pod \"9ae750ad-27c5-4154-b0b2-ce81412b109a\" (UID: \"9ae750ad-27c5-4154-b0b2-ce81412b109a\") " Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.555903 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ae750ad-27c5-4154-b0b2-ce81412b109a-config-data\") pod \"9ae750ad-27c5-4154-b0b2-ce81412b109a\" (UID: \"9ae750ad-27c5-4154-b0b2-ce81412b109a\") " Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.555974 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ae750ad-27c5-4154-b0b2-ce81412b109a-logs\") pod \"9ae750ad-27c5-4154-b0b2-ce81412b109a\" (UID: \"9ae750ad-27c5-4154-b0b2-ce81412b109a\") " Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.556024 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ae750ad-27c5-4154-b0b2-ce81412b109a-combined-ca-bundle\") pod \"9ae750ad-27c5-4154-b0b2-ce81412b109a\" (UID: \"9ae750ad-27c5-4154-b0b2-ce81412b109a\") " Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.556453 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ae750ad-27c5-4154-b0b2-ce81412b109a-logs" (OuterVolumeSpecName: "logs") pod "9ae750ad-27c5-4154-b0b2-ce81412b109a" (UID: "9ae750ad-27c5-4154-b0b2-ce81412b109a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.562300 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ae750ad-27c5-4154-b0b2-ce81412b109a-kube-api-access-gqx8c" (OuterVolumeSpecName: "kube-api-access-gqx8c") pod "9ae750ad-27c5-4154-b0b2-ce81412b109a" (UID: "9ae750ad-27c5-4154-b0b2-ce81412b109a"). InnerVolumeSpecName "kube-api-access-gqx8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.591037 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ae750ad-27c5-4154-b0b2-ce81412b109a-config-data" (OuterVolumeSpecName: "config-data") pod "9ae750ad-27c5-4154-b0b2-ce81412b109a" (UID: "9ae750ad-27c5-4154-b0b2-ce81412b109a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.596567 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ae750ad-27c5-4154-b0b2-ce81412b109a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ae750ad-27c5-4154-b0b2-ce81412b109a" (UID: "9ae750ad-27c5-4154-b0b2-ce81412b109a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.658418 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ae750ad-27c5-4154-b0b2-ce81412b109a-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.658470 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ae750ad-27c5-4154-b0b2-ce81412b109a-logs\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.658484 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ae750ad-27c5-4154-b0b2-ce81412b109a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.658498 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqx8c\" (UniqueName: \"kubernetes.io/projected/9ae750ad-27c5-4154-b0b2-ce81412b109a-kube-api-access-gqx8c\") on node \"crc\" DevicePath \"\"" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.792636 4820 generic.go:334] "Generic (PLEG): container finished" podID="9ae750ad-27c5-4154-b0b2-ce81412b109a" containerID="4ac51585a221f0f3f34e50d6ba04d50f8f6e5e22d1ab2a4f2e6ecf0c4944abce" exitCode=0 Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.793080 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ae750ad-27c5-4154-b0b2-ce81412b109a","Type":"ContainerDied","Data":"4ac51585a221f0f3f34e50d6ba04d50f8f6e5e22d1ab2a4f2e6ecf0c4944abce"} Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.793109 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ae750ad-27c5-4154-b0b2-ce81412b109a","Type":"ContainerDied","Data":"e644a9caa3e3bb80e5af498ba1edc63e601f4c8a78fe029c0eedfa45f83dbba3"} Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.793125 4820 scope.go:117] "RemoveContainer" containerID="4ac51585a221f0f3f34e50d6ba04d50f8f6e5e22d1ab2a4f2e6ecf0c4944abce" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.793219 4820 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.803060 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36042293-d59a-4aab-b851-e06233b41191","Type":"ContainerStarted","Data":"473212e3c05a86f51ca9a5994be39b2d9e505dbc46a8bd44eaae1ac821c63420"} Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.803355 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.817247 4820 scope.go:117] "RemoveContainer" containerID="d1bd213fdd87d366f2f0fe56390b363a5c589f8bd4343aed48eaf119a47b2766" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.851332 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.800599292 podStartE2EDuration="6.851309156s" podCreationTimestamp="2026-02-01 14:39:50 +0000 UTC" firstStartedPulling="2026-02-01 14:39:50.921690424 +0000 UTC m=+1132.442056708" lastFinishedPulling="2026-02-01 14:39:55.972400288 +0000 UTC m=+1137.492766572" observedRunningTime="2026-02-01 14:39:56.826779106 +0000 UTC m=+1138.347145390" watchObservedRunningTime="2026-02-01 14:39:56.851309156 +0000 UTC m=+1138.371675430" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.853433 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.871212 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.877637 4820 scope.go:117] "RemoveContainer" containerID="4ac51585a221f0f3f34e50d6ba04d50f8f6e5e22d1ab2a4f2e6ecf0c4944abce" Feb 01 14:39:56 crc kubenswrapper[4820]: E0201 14:39:56.879332 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ac51585a221f0f3f34e50d6ba04d50f8f6e5e22d1ab2a4f2e6ecf0c4944abce\": container with ID starting with 4ac51585a221f0f3f34e50d6ba04d50f8f6e5e22d1ab2a4f2e6ecf0c4944abce not found: ID does not exist" containerID="4ac51585a221f0f3f34e50d6ba04d50f8f6e5e22d1ab2a4f2e6ecf0c4944abce" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.879618 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ac51585a221f0f3f34e50d6ba04d50f8f6e5e22d1ab2a4f2e6ecf0c4944abce"} err="failed to get container status \"4ac51585a221f0f3f34e50d6ba04d50f8f6e5e22d1ab2a4f2e6ecf0c4944abce\": rpc error: code = NotFound desc = could not find container \"4ac51585a221f0f3f34e50d6ba04d50f8f6e5e22d1ab2a4f2e6ecf0c4944abce\": container with ID starting with 4ac51585a221f0f3f34e50d6ba04d50f8f6e5e22d1ab2a4f2e6ecf0c4944abce not found: ID does not exist" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.879704 4820 scope.go:117] "RemoveContainer" containerID="d1bd213fdd87d366f2f0fe56390b363a5c589f8bd4343aed48eaf119a47b2766" Feb 01 14:39:56 crc kubenswrapper[4820]: E0201 14:39:56.880652 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1bd213fdd87d366f2f0fe56390b363a5c589f8bd4343aed48eaf119a47b2766\": container with ID starting with d1bd213fdd87d366f2f0fe56390b363a5c589f8bd4343aed48eaf119a47b2766 not found: ID does not exist" containerID="d1bd213fdd87d366f2f0fe56390b363a5c589f8bd4343aed48eaf119a47b2766" Feb 01 14:39:56 crc 
kubenswrapper[4820]: I0201 14:39:56.880701 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1bd213fdd87d366f2f0fe56390b363a5c589f8bd4343aed48eaf119a47b2766"} err="failed to get container status \"d1bd213fdd87d366f2f0fe56390b363a5c589f8bd4343aed48eaf119a47b2766\": rpc error: code = NotFound desc = could not find container \"d1bd213fdd87d366f2f0fe56390b363a5c589f8bd4343aed48eaf119a47b2766\": container with ID starting with d1bd213fdd87d366f2f0fe56390b363a5c589f8bd4343aed48eaf119a47b2766 not found: ID does not exist" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.882466 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 01 14:39:56 crc kubenswrapper[4820]: E0201 14:39:56.882848 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae750ad-27c5-4154-b0b2-ce81412b109a" containerName="nova-api-log" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.882866 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae750ad-27c5-4154-b0b2-ce81412b109a" containerName="nova-api-log" Feb 01 14:39:56 crc kubenswrapper[4820]: E0201 14:39:56.882903 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae750ad-27c5-4154-b0b2-ce81412b109a" containerName="nova-api-api" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.882910 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae750ad-27c5-4154-b0b2-ce81412b109a" containerName="nova-api-api" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.883108 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ae750ad-27c5-4154-b0b2-ce81412b109a" containerName="nova-api-log" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.883140 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ae750ad-27c5-4154-b0b2-ce81412b109a" containerName="nova-api-api" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.884086 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.890267 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.917276 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.962618 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/541a0f9b-7169-48b1-93bd-a9ae101254ac-config-data\") pod \"nova-api-0\" (UID: \"541a0f9b-7169-48b1-93bd-a9ae101254ac\") " pod="openstack/nova-api-0" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.962673 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2f2t\" (UniqueName: \"kubernetes.io/projected/541a0f9b-7169-48b1-93bd-a9ae101254ac-kube-api-access-j2f2t\") pod \"nova-api-0\" (UID: \"541a0f9b-7169-48b1-93bd-a9ae101254ac\") " pod="openstack/nova-api-0" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.962739 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/541a0f9b-7169-48b1-93bd-a9ae101254ac-logs\") pod \"nova-api-0\" (UID: \"541a0f9b-7169-48b1-93bd-a9ae101254ac\") " pod="openstack/nova-api-0" Feb 01 14:39:56 crc kubenswrapper[4820]: I0201 14:39:56.962784 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/541a0f9b-7169-48b1-93bd-a9ae101254ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"541a0f9b-7169-48b1-93bd-a9ae101254ac\") " pod="openstack/nova-api-0" Feb 01 14:39:57 crc kubenswrapper[4820]: I0201 14:39:57.063797 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/541a0f9b-7169-48b1-93bd-a9ae101254ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"541a0f9b-7169-48b1-93bd-a9ae101254ac\") " pod="openstack/nova-api-0" Feb 01 14:39:57 crc kubenswrapper[4820]: I0201 14:39:57.063897 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/541a0f9b-7169-48b1-93bd-a9ae101254ac-config-data\") pod \"nova-api-0\" (UID: \"541a0f9b-7169-48b1-93bd-a9ae101254ac\") " pod="openstack/nova-api-0" Feb 01 14:39:57 crc kubenswrapper[4820]: I0201 14:39:57.063925 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2f2t\" (UniqueName: \"kubernetes.io/projected/541a0f9b-7169-48b1-93bd-a9ae101254ac-kube-api-access-j2f2t\") pod \"nova-api-0\" (UID: \"541a0f9b-7169-48b1-93bd-a9ae101254ac\") " pod="openstack/nova-api-0" Feb 01 14:39:57 crc kubenswrapper[4820]: I0201 14:39:57.063981 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/541a0f9b-7169-48b1-93bd-a9ae101254ac-logs\") pod \"nova-api-0\" (UID: \"541a0f9b-7169-48b1-93bd-a9ae101254ac\") " pod="openstack/nova-api-0" Feb 01 14:39:57 crc kubenswrapper[4820]: I0201 14:39:57.064343 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/541a0f9b-7169-48b1-93bd-a9ae101254ac-logs\") pod \"nova-api-0\" (UID: \"541a0f9b-7169-48b1-93bd-a9ae101254ac\") " 
pod="openstack/nova-api-0" Feb 01 14:39:57 crc kubenswrapper[4820]: I0201 14:39:57.069111 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/541a0f9b-7169-48b1-93bd-a9ae101254ac-config-data\") pod \"nova-api-0\" (UID: \"541a0f9b-7169-48b1-93bd-a9ae101254ac\") " pod="openstack/nova-api-0" Feb 01 14:39:57 crc kubenswrapper[4820]: I0201 14:39:57.070120 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/541a0f9b-7169-48b1-93bd-a9ae101254ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"541a0f9b-7169-48b1-93bd-a9ae101254ac\") " pod="openstack/nova-api-0" Feb 01 14:39:57 crc kubenswrapper[4820]: I0201 14:39:57.082754 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2f2t\" (UniqueName: \"kubernetes.io/projected/541a0f9b-7169-48b1-93bd-a9ae101254ac-kube-api-access-j2f2t\") pod \"nova-api-0\" (UID: \"541a0f9b-7169-48b1-93bd-a9ae101254ac\") " pod="openstack/nova-api-0" Feb 01 14:39:57 crc kubenswrapper[4820]: I0201 14:39:57.209666 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ae750ad-27c5-4154-b0b2-ce81412b109a" path="/var/lib/kubelet/pods/9ae750ad-27c5-4154-b0b2-ce81412b109a/volumes" Feb 01 14:39:57 crc kubenswrapper[4820]: I0201 14:39:57.210465 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 01 14:39:57 crc kubenswrapper[4820]: I0201 14:39:57.210503 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 01 14:39:57 crc kubenswrapper[4820]: I0201 14:39:57.240534 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 01 14:39:57 crc kubenswrapper[4820]: W0201 14:39:57.673192 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod541a0f9b_7169_48b1_93bd_a9ae101254ac.slice/crio-edad7e8a462b92ecd6d4f209eb8efb3e7c143473f9b5211331132a9955955b13 WatchSource:0}: Error finding container edad7e8a462b92ecd6d4f209eb8efb3e7c143473f9b5211331132a9955955b13: Status 404 returned error can't find the container with id edad7e8a462b92ecd6d4f209eb8efb3e7c143473f9b5211331132a9955955b13 Feb 01 14:39:57 crc kubenswrapper[4820]: I0201 14:39:57.674563 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 01 14:39:57 crc kubenswrapper[4820]: I0201 14:39:57.815835 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"541a0f9b-7169-48b1-93bd-a9ae101254ac","Type":"ContainerStarted","Data":"edad7e8a462b92ecd6d4f209eb8efb3e7c143473f9b5211331132a9955955b13"} Feb 01 14:39:58 crc kubenswrapper[4820]: I0201 14:39:58.281130 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 01 14:39:58 crc kubenswrapper[4820]: I0201 14:39:58.828732 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"541a0f9b-7169-48b1-93bd-a9ae101254ac","Type":"ContainerStarted","Data":"c8a689e0511f769f5dc7a87b918b5ee576750d5812b434e70ceefea4aee1f0c0"} Feb 01 14:39:58 crc kubenswrapper[4820]: I0201 14:39:58.830003 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"541a0f9b-7169-48b1-93bd-a9ae101254ac","Type":"ContainerStarted","Data":"5d82048f16d41e020a6cf78fd414bc88d253624c29e41b966fcefa90ac4789a4"} Feb 01 14:39:58 crc kubenswrapper[4820]: I0201 14:39:58.854659 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.854636488 podStartE2EDuration="2.854636488s" podCreationTimestamp="2026-02-01 14:39:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:39:58.843070936 +0000 UTC m=+1140.363437230" watchObservedRunningTime="2026-02-01 14:39:58.854636488 +0000 UTC m=+1140.375002792" Feb 01 14:40:02 crc kubenswrapper[4820]: I0201 14:40:02.203610 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 01 14:40:02 crc kubenswrapper[4820]: I0201 14:40:02.203968 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 01 14:40:02 crc kubenswrapper[4820]: I0201 14:40:02.219069 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 01 14:40:03 crc kubenswrapper[4820]: I0201 14:40:03.220149 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ae340692-583b-402a-8638-a9d9d9442c08" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 01 14:40:03 crc kubenswrapper[4820]: I0201 14:40:03.220217 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ae340692-583b-402a-8638-a9d9d9442c08" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 01 14:40:03 crc kubenswrapper[4820]: I0201 14:40:03.281040 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 01 14:40:03 crc kubenswrapper[4820]: I0201 14:40:03.314103 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 01 14:40:03 crc kubenswrapper[4820]: I0201 14:40:03.914666 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 01 14:40:07 crc kubenswrapper[4820]: I0201 14:40:07.241202 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 01 14:40:07 crc kubenswrapper[4820]: I0201 14:40:07.241687 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 01 14:40:08 crc kubenswrapper[4820]: I0201 14:40:08.324156 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="541a0f9b-7169-48b1-93bd-a9ae101254ac" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.180:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 01 14:40:08 crc kubenswrapper[4820]: I0201 14:40:08.324150 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="541a0f9b-7169-48b1-93bd-a9ae101254ac" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.180:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 01 14:40:12 
crc kubenswrapper[4820]: I0201 14:40:12.208564 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 01 14:40:12 crc kubenswrapper[4820]: I0201 14:40:12.209659 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 01 14:40:12 crc kubenswrapper[4820]: I0201 14:40:12.218126 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 01 14:40:12 crc kubenswrapper[4820]: I0201 14:40:12.969576 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 01 14:40:14 crc kubenswrapper[4820]: I0201 14:40:14.947343 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:40:14 crc kubenswrapper[4820]: I0201 14:40:14.979451 4820 generic.go:334] "Generic (PLEG): container finished" podID="f5d316bc-edb0-4779-836c-ee3368e84b91" containerID="6b1c6efa37267f22ed755d48068b567b61d22c9489150b53e0aaedff80891d57" exitCode=137 Feb 01 14:40:14 crc kubenswrapper[4820]: I0201 14:40:14.979544 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:40:14 crc kubenswrapper[4820]: I0201 14:40:14.979560 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f5d316bc-edb0-4779-836c-ee3368e84b91","Type":"ContainerDied","Data":"6b1c6efa37267f22ed755d48068b567b61d22c9489150b53e0aaedff80891d57"} Feb 01 14:40:14 crc kubenswrapper[4820]: I0201 14:40:14.979611 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f5d316bc-edb0-4779-836c-ee3368e84b91","Type":"ContainerDied","Data":"cde99c3c2ad6b24e1e7a78fea6b13001ad5b2fef2a24d9e012dd25d240367b81"} Feb 01 14:40:14 crc kubenswrapper[4820]: I0201 14:40:14.979639 4820 scope.go:117] "RemoveContainer" containerID="6b1c6efa37267f22ed755d48068b567b61d22c9489150b53e0aaedff80891d57" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.001467 4820 scope.go:117] "RemoveContainer" containerID="6b1c6efa37267f22ed755d48068b567b61d22c9489150b53e0aaedff80891d57" Feb 01 14:40:15 crc kubenswrapper[4820]: E0201 14:40:15.001934 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b1c6efa37267f22ed755d48068b567b61d22c9489150b53e0aaedff80891d57\": container with ID starting with 6b1c6efa37267f22ed755d48068b567b61d22c9489150b53e0aaedff80891d57 not found: ID does not exist" containerID="6b1c6efa37267f22ed755d48068b567b61d22c9489150b53e0aaedff80891d57" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.002008 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b1c6efa37267f22ed755d48068b567b61d22c9489150b53e0aaedff80891d57"} err="failed to get container status \"6b1c6efa37267f22ed755d48068b567b61d22c9489150b53e0aaedff80891d57\": rpc error: code = NotFound desc = could not find container \"6b1c6efa37267f22ed755d48068b567b61d22c9489150b53e0aaedff80891d57\": container with ID starting with 6b1c6efa37267f22ed755d48068b567b61d22c9489150b53e0aaedff80891d57 not found: ID does not exist" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.118032 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f5d316bc-edb0-4779-836c-ee3368e84b91-combined-ca-bundle\") pod \"f5d316bc-edb0-4779-836c-ee3368e84b91\" (UID: \"f5d316bc-edb0-4779-836c-ee3368e84b91\") " Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.118141 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5d316bc-edb0-4779-836c-ee3368e84b91-config-data\") pod \"f5d316bc-edb0-4779-836c-ee3368e84b91\" (UID: \"f5d316bc-edb0-4779-836c-ee3368e84b91\") " Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.118206 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z8tv\" (UniqueName: \"kubernetes.io/projected/f5d316bc-edb0-4779-836c-ee3368e84b91-kube-api-access-9z8tv\") pod \"f5d316bc-edb0-4779-836c-ee3368e84b91\" (UID: \"f5d316bc-edb0-4779-836c-ee3368e84b91\") " Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.124735 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5d316bc-edb0-4779-836c-ee3368e84b91-kube-api-access-9z8tv" (OuterVolumeSpecName: "kube-api-access-9z8tv") pod "f5d316bc-edb0-4779-836c-ee3368e84b91" (UID: "f5d316bc-edb0-4779-836c-ee3368e84b91"). InnerVolumeSpecName "kube-api-access-9z8tv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.146334 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5d316bc-edb0-4779-836c-ee3368e84b91-config-data" (OuterVolumeSpecName: "config-data") pod "f5d316bc-edb0-4779-836c-ee3368e84b91" (UID: "f5d316bc-edb0-4779-836c-ee3368e84b91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.151549 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5d316bc-edb0-4779-836c-ee3368e84b91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5d316bc-edb0-4779-836c-ee3368e84b91" (UID: "f5d316bc-edb0-4779-836c-ee3368e84b91"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.219999 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d316bc-edb0-4779-836c-ee3368e84b91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.220063 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5d316bc-edb0-4779-836c-ee3368e84b91-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.220081 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z8tv\" (UniqueName: \"kubernetes.io/projected/f5d316bc-edb0-4779-836c-ee3368e84b91-kube-api-access-9z8tv\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.308097 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.325174 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.337451 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 01 14:40:15 crc kubenswrapper[4820]: E0201 14:40:15.337952 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5d316bc-edb0-4779-836c-ee3368e84b91" containerName="nova-cell1-novncproxy-novncproxy" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.337974 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5d316bc-edb0-4779-836c-ee3368e84b91" containerName="nova-cell1-novncproxy-novncproxy" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.338475 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5d316bc-edb0-4779-836c-ee3368e84b91" containerName="nova-cell1-novncproxy-novncproxy" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.340857 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.342987 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.343156 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.344105 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.364461 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.526369 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/75b482ca-64e8-42a9-b8f2-5272e591448b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"75b482ca-64e8-42a9-b8f2-5272e591448b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.526452 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b482ca-64e8-42a9-b8f2-5272e591448b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"75b482ca-64e8-42a9-b8f2-5272e591448b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.526489 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b482ca-64e8-42a9-b8f2-5272e591448b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"75b482ca-64e8-42a9-b8f2-5272e591448b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.526517 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/75b482ca-64e8-42a9-b8f2-5272e591448b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"75b482ca-64e8-42a9-b8f2-5272e591448b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.526593 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp8n6\" (UniqueName: \"kubernetes.io/projected/75b482ca-64e8-42a9-b8f2-5272e591448b-kube-api-access-xp8n6\") pod \"nova-cell1-novncproxy-0\" (UID: \"75b482ca-64e8-42a9-b8f2-5272e591448b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.628453 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/75b482ca-64e8-42a9-b8f2-5272e591448b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"75b482ca-64e8-42a9-b8f2-5272e591448b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.628554 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b482ca-64e8-42a9-b8f2-5272e591448b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"75b482ca-64e8-42a9-b8f2-5272e591448b\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.628624 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b482ca-64e8-42a9-b8f2-5272e591448b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"75b482ca-64e8-42a9-b8f2-5272e591448b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.628663 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/75b482ca-64e8-42a9-b8f2-5272e591448b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"75b482ca-64e8-42a9-b8f2-5272e591448b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.629135 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp8n6\" (UniqueName: \"kubernetes.io/projected/75b482ca-64e8-42a9-b8f2-5272e591448b-kube-api-access-xp8n6\") pod \"nova-cell1-novncproxy-0\" (UID: \"75b482ca-64e8-42a9-b8f2-5272e591448b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.634694 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b482ca-64e8-42a9-b8f2-5272e591448b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"75b482ca-64e8-42a9-b8f2-5272e591448b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.638393 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/75b482ca-64e8-42a9-b8f2-5272e591448b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"75b482ca-64e8-42a9-b8f2-5272e591448b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.638938 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/75b482ca-64e8-42a9-b8f2-5272e591448b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"75b482ca-64e8-42a9-b8f2-5272e591448b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.641956 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b482ca-64e8-42a9-b8f2-5272e591448b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"75b482ca-64e8-42a9-b8f2-5272e591448b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.644779 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp8n6\" (UniqueName: \"kubernetes.io/projected/75b482ca-64e8-42a9-b8f2-5272e591448b-kube-api-access-xp8n6\") pod \"nova-cell1-novncproxy-0\" (UID: \"75b482ca-64e8-42a9-b8f2-5272e591448b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.668835 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.934241 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 01 14:40:15 crc kubenswrapper[4820]: W0201 14:40:15.936596 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75b482ca_64e8_42a9_b8f2_5272e591448b.slice/crio-a70f5ad1c116f8d6ce7ab156b54b7d696bcf1820334fae5d5a129a5bc90300ff WatchSource:0}: Error finding container a70f5ad1c116f8d6ce7ab156b54b7d696bcf1820334fae5d5a129a5bc90300ff: Status 404 returned error can't find the container with id a70f5ad1c116f8d6ce7ab156b54b7d696bcf1820334fae5d5a129a5bc90300ff Feb 01 14:40:15 crc kubenswrapper[4820]: I0201 14:40:15.988088 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"75b482ca-64e8-42a9-b8f2-5272e591448b","Type":"ContainerStarted","Data":"a70f5ad1c116f8d6ce7ab156b54b7d696bcf1820334fae5d5a129a5bc90300ff"} Feb 01 14:40:17 crc kubenswrapper[4820]: I0201 14:40:17.008941 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"75b482ca-64e8-42a9-b8f2-5272e591448b","Type":"ContainerStarted","Data":"8a661a2841d9243f422a98996af81eb8d60ed91aabdd93b2e6ea413288f0ddaf"} Feb 01 14:40:17 crc kubenswrapper[4820]: I0201 14:40:17.028840 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.028824501 podStartE2EDuration="2.028824501s" podCreationTimestamp="2026-02-01 14:40:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:40:17.024403423 +0000 UTC m=+1158.544769757" watchObservedRunningTime="2026-02-01 14:40:17.028824501 +0000 UTC m=+1158.549190785" Feb 01 14:40:17 crc kubenswrapper[4820]: I0201 14:40:17.211740 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5d316bc-edb0-4779-836c-ee3368e84b91" path="/var/lib/kubelet/pods/f5d316bc-edb0-4779-836c-ee3368e84b91/volumes" Feb 01 14:40:17 crc kubenswrapper[4820]: I0201 14:40:17.244869 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 01 14:40:17 crc kubenswrapper[4820]: I0201 14:40:17.245414 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 01 14:40:17 crc kubenswrapper[4820]: I0201 14:40:17.245535 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 01 14:40:17 crc kubenswrapper[4820]: I0201 14:40:17.248789 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 01 14:40:18 crc kubenswrapper[4820]: I0201 14:40:18.018925 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 01 14:40:18 crc kubenswrapper[4820]: I0201 14:40:18.024500 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 01 14:40:18 crc kubenswrapper[4820]: I0201 14:40:18.231595 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-w7xrd"] Feb 01 14:40:18 crc kubenswrapper[4820]: I0201 14:40:18.233511 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" Feb 01 14:40:18 crc kubenswrapper[4820]: I0201 14:40:18.245591 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-w7xrd"] Feb 01 14:40:18 crc kubenswrapper[4820]: I0201 14:40:18.383111 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/558aab28-1ba2-46cc-9504-405fc50f326f-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-w7xrd\" (UID: \"558aab28-1ba2-46cc-9504-405fc50f326f\") " pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" Feb 01 14:40:18 crc kubenswrapper[4820]: I0201 14:40:18.383157 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/558aab28-1ba2-46cc-9504-405fc50f326f-config\") pod \"dnsmasq-dns-68d4b6d797-w7xrd\" (UID: \"558aab28-1ba2-46cc-9504-405fc50f326f\") " pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" Feb 01 14:40:18 crc kubenswrapper[4820]: I0201 14:40:18.383196 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/558aab28-1ba2-46cc-9504-405fc50f326f-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-w7xrd\" (UID: \"558aab28-1ba2-46cc-9504-405fc50f326f\") " pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" Feb 01 14:40:18 crc kubenswrapper[4820]: I0201 14:40:18.383214 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/558aab28-1ba2-46cc-9504-405fc50f326f-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-w7xrd\" (UID: \"558aab28-1ba2-46cc-9504-405fc50f326f\") " pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" Feb 01 14:40:18 crc kubenswrapper[4820]: I0201 14:40:18.383238 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h289r\" (UniqueName: \"kubernetes.io/projected/558aab28-1ba2-46cc-9504-405fc50f326f-kube-api-access-h289r\") pod \"dnsmasq-dns-68d4b6d797-w7xrd\" (UID: \"558aab28-1ba2-46cc-9504-405fc50f326f\") " pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" Feb 01 14:40:18 crc kubenswrapper[4820]: I0201 14:40:18.485169 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/558aab28-1ba2-46cc-9504-405fc50f326f-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-w7xrd\" (UID: \"558aab28-1ba2-46cc-9504-405fc50f326f\") " pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" Feb 01 14:40:18 crc kubenswrapper[4820]: I0201 14:40:18.485240 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/558aab28-1ba2-46cc-9504-405fc50f326f-config\") pod \"dnsmasq-dns-68d4b6d797-w7xrd\" (UID: \"558aab28-1ba2-46cc-9504-405fc50f326f\") " pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" Feb 01 14:40:18 crc kubenswrapper[4820]: I0201 14:40:18.485312 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/558aab28-1ba2-46cc-9504-405fc50f326f-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-w7xrd\" (UID: \"558aab28-1ba2-46cc-9504-405fc50f326f\") " pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" Feb 01 14:40:18 crc kubenswrapper[4820]: I0201 14:40:18.485339 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/558aab28-1ba2-46cc-9504-405fc50f326f-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-w7xrd\" (UID: \"558aab28-1ba2-46cc-9504-405fc50f326f\") " pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" Feb 01 14:40:18 crc kubenswrapper[4820]: I0201 14:40:18.485390 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h289r\" (UniqueName: \"kubernetes.io/projected/558aab28-1ba2-46cc-9504-405fc50f326f-kube-api-access-h289r\") pod \"dnsmasq-dns-68d4b6d797-w7xrd\" (UID: \"558aab28-1ba2-46cc-9504-405fc50f326f\") " pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" Feb 01 14:40:18 crc kubenswrapper[4820]: I0201 14:40:18.486456 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/558aab28-1ba2-46cc-9504-405fc50f326f-config\") pod \"dnsmasq-dns-68d4b6d797-w7xrd\" (UID: \"558aab28-1ba2-46cc-9504-405fc50f326f\") " pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" Feb 01 14:40:18 crc kubenswrapper[4820]: I0201 14:40:18.486597 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/558aab28-1ba2-46cc-9504-405fc50f326f-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-w7xrd\" (UID: \"558aab28-1ba2-46cc-9504-405fc50f326f\") " pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" Feb 01 14:40:18 crc kubenswrapper[4820]: I0201 14:40:18.486707 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/558aab28-1ba2-46cc-9504-405fc50f326f-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-w7xrd\" (UID: \"558aab28-1ba2-46cc-9504-405fc50f326f\") " pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" Feb 01 14:40:18 crc kubenswrapper[4820]: I0201 14:40:18.486910 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/558aab28-1ba2-46cc-9504-405fc50f326f-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-w7xrd\" (UID: \"558aab28-1ba2-46cc-9504-405fc50f326f\") " pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" Feb 01 14:40:18 crc kubenswrapper[4820]: I0201 14:40:18.519902 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h289r\" (UniqueName: \"kubernetes.io/projected/558aab28-1ba2-46cc-9504-405fc50f326f-kube-api-access-h289r\") pod \"dnsmasq-dns-68d4b6d797-w7xrd\" (UID: \"558aab28-1ba2-46cc-9504-405fc50f326f\") " pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" Feb 01 14:40:18 crc kubenswrapper[4820]: I0201 14:40:18.565082 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" Feb 01 14:40:18 crc kubenswrapper[4820]: I0201 14:40:18.997647 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-w7xrd"] Feb 01 14:40:19 crc kubenswrapper[4820]: I0201 14:40:19.026516 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" event={"ID":"558aab28-1ba2-46cc-9504-405fc50f326f","Type":"ContainerStarted","Data":"41168bc1d0de78f41962dd0ea7ebe627255bbd785820ae04354849629ca3823d"} Feb 01 14:40:19 crc kubenswrapper[4820]: I0201 14:40:19.242650 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:40:19 crc kubenswrapper[4820]: I0201 14:40:19.243056 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:40:19 crc kubenswrapper[4820]: I0201 14:40:19.243106 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 14:40:19 crc kubenswrapper[4820]: I0201 14:40:19.243776 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"18339407f9a7bd299b3086430fb392c91bda2f44145a4ddfb153b9aef0bd2fe6"} pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 14:40:19 crc kubenswrapper[4820]: I0201 14:40:19.243837 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" containerID="cri-o://18339407f9a7bd299b3086430fb392c91bda2f44145a4ddfb153b9aef0bd2fe6" gracePeriod=600 Feb 01 14:40:19 crc kubenswrapper[4820]: I0201 14:40:19.373037 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:40:19 crc kubenswrapper[4820]: I0201 14:40:19.373308 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36042293-d59a-4aab-b851-e06233b41191" containerName="ceilometer-central-agent" containerID="cri-o://d7b807f29b9b9d4dd6290ad6658ff984be0c7f39da1b0d65122af5f40804471a" gracePeriod=30 Feb 01 14:40:19 crc kubenswrapper[4820]: I0201 14:40:19.373397 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36042293-d59a-4aab-b851-e06233b41191" containerName="ceilometer-notification-agent" containerID="cri-o://ca1105ff3ced59cc2514e0ba69b2cb47dc78a94c525fc89dd4ed9d0a43de6758" gracePeriod=30 Feb 01 14:40:19 crc kubenswrapper[4820]: I0201 14:40:19.373401 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36042293-d59a-4aab-b851-e06233b41191" containerName="sg-core" containerID="cri-o://29e9754534bbaaa830648fdb4b48ee3d38e15f4be76356c4e1559451d414159b" gracePeriod=30 Feb 01 14:40:19 crc 
kubenswrapper[4820]: I0201 14:40:19.373512 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36042293-d59a-4aab-b851-e06233b41191" containerName="proxy-httpd" containerID="cri-o://473212e3c05a86f51ca9a5994be39b2d9e505dbc46a8bd44eaae1ac821c63420" gracePeriod=30 Feb 01 14:40:19 crc kubenswrapper[4820]: I0201 14:40:19.477438 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="36042293-d59a-4aab-b851-e06233b41191" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.176:3000/\": read tcp 10.217.0.2:50380->10.217.0.176:3000: read: connection reset by peer" Feb 01 14:40:20 crc kubenswrapper[4820]: I0201 14:40:20.052428 4820 generic.go:334] "Generic (PLEG): container finished" podID="36042293-d59a-4aab-b851-e06233b41191" containerID="473212e3c05a86f51ca9a5994be39b2d9e505dbc46a8bd44eaae1ac821c63420" exitCode=0 Feb 01 14:40:20 crc kubenswrapper[4820]: I0201 14:40:20.052791 4820 generic.go:334] "Generic (PLEG): container finished" podID="36042293-d59a-4aab-b851-e06233b41191" containerID="29e9754534bbaaa830648fdb4b48ee3d38e15f4be76356c4e1559451d414159b" exitCode=2 Feb 01 14:40:20 crc kubenswrapper[4820]: I0201 14:40:20.052809 4820 generic.go:334] "Generic (PLEG): container finished" podID="36042293-d59a-4aab-b851-e06233b41191" containerID="d7b807f29b9b9d4dd6290ad6658ff984be0c7f39da1b0d65122af5f40804471a" exitCode=0 Feb 01 14:40:20 crc kubenswrapper[4820]: I0201 14:40:20.052800 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36042293-d59a-4aab-b851-e06233b41191","Type":"ContainerDied","Data":"473212e3c05a86f51ca9a5994be39b2d9e505dbc46a8bd44eaae1ac821c63420"} Feb 01 14:40:20 crc kubenswrapper[4820]: I0201 14:40:20.052862 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36042293-d59a-4aab-b851-e06233b41191","Type":"ContainerDied","Data":"29e9754534bbaaa830648fdb4b48ee3d38e15f4be76356c4e1559451d414159b"} Feb 01 14:40:20 crc kubenswrapper[4820]: I0201 14:40:20.052921 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36042293-d59a-4aab-b851-e06233b41191","Type":"ContainerDied","Data":"d7b807f29b9b9d4dd6290ad6658ff984be0c7f39da1b0d65122af5f40804471a"} Feb 01 14:40:20 crc kubenswrapper[4820]: I0201 14:40:20.054336 4820 generic.go:334] "Generic (PLEG): container finished" podID="558aab28-1ba2-46cc-9504-405fc50f326f" containerID="7d9a4c08cff1a1a6412656a553f0510392cc0ed1df81f77cb725927ee90c5a98" exitCode=0 Feb 01 14:40:20 crc kubenswrapper[4820]: I0201 14:40:20.054406 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" event={"ID":"558aab28-1ba2-46cc-9504-405fc50f326f","Type":"ContainerDied","Data":"7d9a4c08cff1a1a6412656a553f0510392cc0ed1df81f77cb725927ee90c5a98"} Feb 01 14:40:20 crc kubenswrapper[4820]: I0201 14:40:20.057565 4820 generic.go:334] "Generic (PLEG): container finished" podID="060a9e0b-803f-4ccc-bed6-92614d449527" containerID="18339407f9a7bd299b3086430fb392c91bda2f44145a4ddfb153b9aef0bd2fe6" exitCode=0 Feb 01 14:40:20 crc kubenswrapper[4820]: I0201 14:40:20.057642 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerDied","Data":"18339407f9a7bd299b3086430fb392c91bda2f44145a4ddfb153b9aef0bd2fe6"} Feb 01 14:40:20 crc kubenswrapper[4820]: I0201 
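The machine-config-daemon sequence above is the complete liveness-driven restart path: patch_prober records the failing HTTP GET, prober.go marks the probe failed, the sync loop flags the container unhealthy, kuberuntime kills it with the pod's termination grace period (600s here), and PLEG then reports ContainerDied followed by ContainerStarted for the replacement. A small illustrative sketch (regexes keyed to the fields shown above, not kubelet code) that pairs a liveness failure with the kill that follows, to distinguish deliberate restarts from other terminations:

import re

FAIL = re.compile(r'"Probe failed" probeType="Liveness" pod="([^"]+)"')
KILL = re.compile(r'"Killing container with a grace period" pod="([^"]+)" '
                  r'podUID="[^"]+" containerName="([^"]+)".*gracePeriod=(\d+)')

def liveness_kills(lines):
    unhealthy, kills = set(), []
    for line in lines:
        if (m := FAIL.search(line)):
            unhealthy.add(m.group(1))
        elif (m := KILL.search(line)) and m.group(1) in unhealthy:
            kills.append((m.group(1), m.group(2), int(m.group(3))))
    return kills
# here: [('openshift-machine-config-operator/machine-config-daemon-w8vbg',
#         'machine-config-daemon', 600)]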
Feb 01 14:40:20 crc kubenswrapper[4820]: I0201 14:40:20.057699 4820 scope.go:117] "RemoveContainer" containerID="3b1868d25809e0a94d683b5755d093ee9d0de6decf94341a6eb437233a52b5e1"
Feb 01 14:40:20 crc kubenswrapper[4820]: I0201 14:40:20.387940 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="36042293-d59a-4aab-b851-e06233b41191" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.176:3000/\": dial tcp 10.217.0.176:3000: connect: connection refused"
Feb 01 14:40:20 crc kubenswrapper[4820]: I0201 14:40:20.474383 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 01 14:40:20 crc kubenswrapper[4820]: I0201 14:40:20.669324 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 01 14:40:21 crc kubenswrapper[4820]: I0201 14:40:21.069783 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" event={"ID":"558aab28-1ba2-46cc-9504-405fc50f326f","Type":"ContainerStarted","Data":"b4385fc10a6e0340954ded696f86c91f43eab93b1bfe9f54aa6aa473118c3ba8"}
Feb 01 14:40:21 crc kubenswrapper[4820]: I0201 14:40:21.069962 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="541a0f9b-7169-48b1-93bd-a9ae101254ac" containerName="nova-api-log" containerID="cri-o://5d82048f16d41e020a6cf78fd414bc88d253624c29e41b966fcefa90ac4789a4" gracePeriod=30
Feb 01 14:40:21 crc kubenswrapper[4820]: I0201 14:40:21.070028 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd"
Feb 01 14:40:21 crc kubenswrapper[4820]: I0201 14:40:21.070029 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="541a0f9b-7169-48b1-93bd-a9ae101254ac" containerName="nova-api-api" containerID="cri-o://c8a689e0511f769f5dc7a87b918b5ee576750d5812b434e70ceefea4aee1f0c0" gracePeriod=30
Feb 01 14:40:22 crc kubenswrapper[4820]: I0201 14:40:22.078155 4820 generic.go:334] "Generic (PLEG): container finished" podID="541a0f9b-7169-48b1-93bd-a9ae101254ac" containerID="5d82048f16d41e020a6cf78fd414bc88d253624c29e41b966fcefa90ac4789a4" exitCode=143
Feb 01 14:40:22 crc kubenswrapper[4820]: I0201 14:40:22.078796 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"541a0f9b-7169-48b1-93bd-a9ae101254ac","Type":"ContainerDied","Data":"5d82048f16d41e020a6cf78fd414bc88d253624c29e41b966fcefa90ac4789a4"}
Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.098994 4820 generic.go:334] "Generic (PLEG): container finished" podID="36042293-d59a-4aab-b851-e06233b41191" containerID="ca1105ff3ced59cc2514e0ba69b2cb47dc78a94c525fc89dd4ed9d0a43de6758" exitCode=0
Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.099076 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36042293-d59a-4aab-b851-e06233b41191","Type":"ContainerDied","Data":"ca1105ff3ced59cc2514e0ba69b2cb47dc78a94c525fc89dd4ed9d0a43de6758"}
Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.425132 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
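The exitCode values in the "container finished" records follow the usual shell convention: 0 is a clean exit, 1-127 are application statuses (sg-core's 2 above), and 128+N means the process died on signal N, so nova-api-log's 143 is 128+15, i.e. it exited on the SIGTERM delivered at the start of its 30s grace period. A tiny decoder:

import signal

def describe_exit(code: int) -> str:
    if code == 0:
        return "clean exit"
    if code > 128:
        try:
            return f"killed by {signal.Signals(code - 128).name}"
        except ValueError:
            return f"killed by signal {code - 128}"
    return f"application exit status {code}"

print(describe_exit(143))  # killed by SIGTERM
print(describe_exit(2))    # application exit status 2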
Need to start a new one" pod="openstack/ceilometer-0" Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.451557 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" podStartSLOduration=6.451539185 podStartE2EDuration="6.451539185s" podCreationTimestamp="2026-02-01 14:40:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:40:21.088410421 +0000 UTC m=+1162.608776715" watchObservedRunningTime="2026-02-01 14:40:24.451539185 +0000 UTC m=+1165.971905469" Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.601601 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36042293-d59a-4aab-b851-e06233b41191-sg-core-conf-yaml\") pod \"36042293-d59a-4aab-b851-e06233b41191\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.601711 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36042293-d59a-4aab-b851-e06233b41191-log-httpd\") pod \"36042293-d59a-4aab-b851-e06233b41191\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.601763 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36042293-d59a-4aab-b851-e06233b41191-config-data\") pod \"36042293-d59a-4aab-b851-e06233b41191\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.601835 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36042293-d59a-4aab-b851-e06233b41191-run-httpd\") pod \"36042293-d59a-4aab-b851-e06233b41191\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.601917 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36042293-d59a-4aab-b851-e06233b41191-combined-ca-bundle\") pod \"36042293-d59a-4aab-b851-e06233b41191\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.601947 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36042293-d59a-4aab-b851-e06233b41191-scripts\") pod \"36042293-d59a-4aab-b851-e06233b41191\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.601988 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnh2w\" (UniqueName: \"kubernetes.io/projected/36042293-d59a-4aab-b851-e06233b41191-kube-api-access-dnh2w\") pod \"36042293-d59a-4aab-b851-e06233b41191\" (UID: \"36042293-d59a-4aab-b851-e06233b41191\") " Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.602555 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36042293-d59a-4aab-b851-e06233b41191-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "36042293-d59a-4aab-b851-e06233b41191" (UID: "36042293-d59a-4aab-b851-e06233b41191"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.602927 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36042293-d59a-4aab-b851-e06233b41191-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "36042293-d59a-4aab-b851-e06233b41191" (UID: "36042293-d59a-4aab-b851-e06233b41191"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.608413 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36042293-d59a-4aab-b851-e06233b41191-scripts" (OuterVolumeSpecName: "scripts") pod "36042293-d59a-4aab-b851-e06233b41191" (UID: "36042293-d59a-4aab-b851-e06233b41191"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.626042 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36042293-d59a-4aab-b851-e06233b41191-kube-api-access-dnh2w" (OuterVolumeSpecName: "kube-api-access-dnh2w") pod "36042293-d59a-4aab-b851-e06233b41191" (UID: "36042293-d59a-4aab-b851-e06233b41191"). InnerVolumeSpecName "kube-api-access-dnh2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.641352 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36042293-d59a-4aab-b851-e06233b41191-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "36042293-d59a-4aab-b851-e06233b41191" (UID: "36042293-d59a-4aab-b851-e06233b41191"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.668194 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.699857 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36042293-d59a-4aab-b851-e06233b41191-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36042293-d59a-4aab-b851-e06233b41191" (UID: "36042293-d59a-4aab-b851-e06233b41191"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.708326 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36042293-d59a-4aab-b851-e06233b41191-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.708361 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36042293-d59a-4aab-b851-e06233b41191-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.708374 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36042293-d59a-4aab-b851-e06233b41191-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.708384 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnh2w\" (UniqueName: \"kubernetes.io/projected/36042293-d59a-4aab-b851-e06233b41191-kube-api-access-dnh2w\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.708399 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36042293-d59a-4aab-b851-e06233b41191-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.708409 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36042293-d59a-4aab-b851-e06233b41191-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.721173 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36042293-d59a-4aab-b851-e06233b41191-config-data" (OuterVolumeSpecName: "config-data") pod "36042293-d59a-4aab-b851-e06233b41191" (UID: "36042293-d59a-4aab-b851-e06233b41191"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.809966 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/541a0f9b-7169-48b1-93bd-a9ae101254ac-logs\") pod \"541a0f9b-7169-48b1-93bd-a9ae101254ac\" (UID: \"541a0f9b-7169-48b1-93bd-a9ae101254ac\") " Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.810361 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/541a0f9b-7169-48b1-93bd-a9ae101254ac-logs" (OuterVolumeSpecName: "logs") pod "541a0f9b-7169-48b1-93bd-a9ae101254ac" (UID: "541a0f9b-7169-48b1-93bd-a9ae101254ac"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.810863 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/541a0f9b-7169-48b1-93bd-a9ae101254ac-combined-ca-bundle\") pod \"541a0f9b-7169-48b1-93bd-a9ae101254ac\" (UID: \"541a0f9b-7169-48b1-93bd-a9ae101254ac\") " Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.810988 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/541a0f9b-7169-48b1-93bd-a9ae101254ac-config-data\") pod \"541a0f9b-7169-48b1-93bd-a9ae101254ac\" (UID: \"541a0f9b-7169-48b1-93bd-a9ae101254ac\") " Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.811018 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2f2t\" (UniqueName: \"kubernetes.io/projected/541a0f9b-7169-48b1-93bd-a9ae101254ac-kube-api-access-j2f2t\") pod \"541a0f9b-7169-48b1-93bd-a9ae101254ac\" (UID: \"541a0f9b-7169-48b1-93bd-a9ae101254ac\") " Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.811680 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36042293-d59a-4aab-b851-e06233b41191-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.811695 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/541a0f9b-7169-48b1-93bd-a9ae101254ac-logs\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.815137 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/541a0f9b-7169-48b1-93bd-a9ae101254ac-kube-api-access-j2f2t" (OuterVolumeSpecName: "kube-api-access-j2f2t") pod "541a0f9b-7169-48b1-93bd-a9ae101254ac" (UID: "541a0f9b-7169-48b1-93bd-a9ae101254ac"). InnerVolumeSpecName "kube-api-access-j2f2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.834831 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/541a0f9b-7169-48b1-93bd-a9ae101254ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "541a0f9b-7169-48b1-93bd-a9ae101254ac" (UID: "541a0f9b-7169-48b1-93bd-a9ae101254ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.837251 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/541a0f9b-7169-48b1-93bd-a9ae101254ac-config-data" (OuterVolumeSpecName: "config-data") pod "541a0f9b-7169-48b1-93bd-a9ae101254ac" (UID: "541a0f9b-7169-48b1-93bd-a9ae101254ac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.913198 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/541a0f9b-7169-48b1-93bd-a9ae101254ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.913238 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/541a0f9b-7169-48b1-93bd-a9ae101254ac-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:24 crc kubenswrapper[4820]: I0201 14:40:24.913254 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2f2t\" (UniqueName: \"kubernetes.io/projected/541a0f9b-7169-48b1-93bd-a9ae101254ac-kube-api-access-j2f2t\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.112555 4820 generic.go:334] "Generic (PLEG): container finished" podID="541a0f9b-7169-48b1-93bd-a9ae101254ac" containerID="c8a689e0511f769f5dc7a87b918b5ee576750d5812b434e70ceefea4aee1f0c0" exitCode=0 Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.112617 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"541a0f9b-7169-48b1-93bd-a9ae101254ac","Type":"ContainerDied","Data":"c8a689e0511f769f5dc7a87b918b5ee576750d5812b434e70ceefea4aee1f0c0"} Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.112647 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"541a0f9b-7169-48b1-93bd-a9ae101254ac","Type":"ContainerDied","Data":"edad7e8a462b92ecd6d4f209eb8efb3e7c143473f9b5211331132a9955955b13"} Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.112666 4820 scope.go:117] "RemoveContainer" containerID="c8a689e0511f769f5dc7a87b918b5ee576750d5812b434e70ceefea4aee1f0c0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.112803 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.120224 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36042293-d59a-4aab-b851-e06233b41191","Type":"ContainerDied","Data":"faf5197cac02cb6f2f7706a1184a08de7bc1a2d9c5c12be4ec4fd79a277c661d"} Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.120335 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.147580 4820 scope.go:117] "RemoveContainer" containerID="5d82048f16d41e020a6cf78fd414bc88d253624c29e41b966fcefa90ac4789a4" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.159124 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.190810 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.206214 4820 scope.go:117] "RemoveContainer" containerID="c8a689e0511f769f5dc7a87b918b5ee576750d5812b434e70ceefea4aee1f0c0" Feb 01 14:40:25 crc kubenswrapper[4820]: E0201 14:40:25.206782 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8a689e0511f769f5dc7a87b918b5ee576750d5812b434e70ceefea4aee1f0c0\": container with ID starting with c8a689e0511f769f5dc7a87b918b5ee576750d5812b434e70ceefea4aee1f0c0 not found: ID does not exist" containerID="c8a689e0511f769f5dc7a87b918b5ee576750d5812b434e70ceefea4aee1f0c0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.206832 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8a689e0511f769f5dc7a87b918b5ee576750d5812b434e70ceefea4aee1f0c0"} err="failed to get container status \"c8a689e0511f769f5dc7a87b918b5ee576750d5812b434e70ceefea4aee1f0c0\": rpc error: code = NotFound desc = could not find container \"c8a689e0511f769f5dc7a87b918b5ee576750d5812b434e70ceefea4aee1f0c0\": container with ID starting with c8a689e0511f769f5dc7a87b918b5ee576750d5812b434e70ceefea4aee1f0c0 not found: ID does not exist" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.206865 4820 scope.go:117] "RemoveContainer" containerID="5d82048f16d41e020a6cf78fd414bc88d253624c29e41b966fcefa90ac4789a4" Feb 01 14:40:25 crc kubenswrapper[4820]: E0201 14:40:25.207311 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d82048f16d41e020a6cf78fd414bc88d253624c29e41b966fcefa90ac4789a4\": container with ID starting with 5d82048f16d41e020a6cf78fd414bc88d253624c29e41b966fcefa90ac4789a4 not found: ID does not exist" containerID="5d82048f16d41e020a6cf78fd414bc88d253624c29e41b966fcefa90ac4789a4" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.207343 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d82048f16d41e020a6cf78fd414bc88d253624c29e41b966fcefa90ac4789a4"} err="failed to get container status \"5d82048f16d41e020a6cf78fd414bc88d253624c29e41b966fcefa90ac4789a4\": rpc error: code = NotFound desc = could not find container \"5d82048f16d41e020a6cf78fd414bc88d253624c29e41b966fcefa90ac4789a4\": container with ID starting with 5d82048f16d41e020a6cf78fd414bc88d253624c29e41b966fcefa90ac4789a4 not found: ID does not exist" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.207362 4820 scope.go:117] "RemoveContainer" containerID="473212e3c05a86f51ca9a5994be39b2d9e505dbc46a8bd44eaae1ac821c63420" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.233507 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="541a0f9b-7169-48b1-93bd-a9ae101254ac" path="/var/lib/kubelet/pods/541a0f9b-7169-48b1-93bd-a9ae101254ac/volumes" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.234283 4820 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-api-0"] Feb 01 14:40:25 crc kubenswrapper[4820]: E0201 14:40:25.235816 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="541a0f9b-7169-48b1-93bd-a9ae101254ac" containerName="nova-api-log" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.235846 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="541a0f9b-7169-48b1-93bd-a9ae101254ac" containerName="nova-api-log" Feb 01 14:40:25 crc kubenswrapper[4820]: E0201 14:40:25.236187 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36042293-d59a-4aab-b851-e06233b41191" containerName="proxy-httpd" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.236210 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="36042293-d59a-4aab-b851-e06233b41191" containerName="proxy-httpd" Feb 01 14:40:25 crc kubenswrapper[4820]: E0201 14:40:25.236228 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36042293-d59a-4aab-b851-e06233b41191" containerName="ceilometer-notification-agent" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.236238 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="36042293-d59a-4aab-b851-e06233b41191" containerName="ceilometer-notification-agent" Feb 01 14:40:25 crc kubenswrapper[4820]: E0201 14:40:25.236275 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36042293-d59a-4aab-b851-e06233b41191" containerName="ceilometer-central-agent" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.236284 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="36042293-d59a-4aab-b851-e06233b41191" containerName="ceilometer-central-agent" Feb 01 14:40:25 crc kubenswrapper[4820]: E0201 14:40:25.236313 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="541a0f9b-7169-48b1-93bd-a9ae101254ac" containerName="nova-api-api" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.236321 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="541a0f9b-7169-48b1-93bd-a9ae101254ac" containerName="nova-api-api" Feb 01 14:40:25 crc kubenswrapper[4820]: E0201 14:40:25.236333 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36042293-d59a-4aab-b851-e06233b41191" containerName="sg-core" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.236343 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="36042293-d59a-4aab-b851-e06233b41191" containerName="sg-core" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.236581 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="36042293-d59a-4aab-b851-e06233b41191" containerName="sg-core" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.236608 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="541a0f9b-7169-48b1-93bd-a9ae101254ac" containerName="nova-api-log" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.236622 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="541a0f9b-7169-48b1-93bd-a9ae101254ac" containerName="nova-api-api" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.236638 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="36042293-d59a-4aab-b851-e06233b41191" containerName="ceilometer-notification-agent" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.236666 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="36042293-d59a-4aab-b851-e06233b41191" containerName="ceilometer-central-agent" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.236700 4820 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="36042293-d59a-4aab-b851-e06233b41191" containerName="proxy-httpd" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.242747 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.242788 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.242916 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.245691 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.246383 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.246590 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.252383 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.254796 4820 scope.go:117] "RemoveContainer" containerID="29e9754534bbaaa830648fdb4b48ee3d38e15f4be76356c4e1559451d414159b" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.256552 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.259133 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.261603 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.263694 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.274277 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.310252 4820 scope.go:117] "RemoveContainer" containerID="ca1105ff3ced59cc2514e0ba69b2cb47dc78a94c525fc89dd4ed9d0a43de6758" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.338802 4820 scope.go:117] "RemoveContainer" containerID="d7b807f29b9b9d4dd6290ad6658ff984be0c7f39da1b0d65122af5f40804471a" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.426585 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35e5429-427c-4f54-ae6d-60648ef90eed-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a35e5429-427c-4f54-ae6d-60648ef90eed\") " pod="openstack/nova-api-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.426658 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/845e48b0-71b1-4f3b-82d2-db1f00c69601-run-httpd\") pod \"ceilometer-0\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " pod="openstack/ceilometer-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.426698 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/845e48b0-71b1-4f3b-82d2-db1f00c69601-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " pod="openstack/ceilometer-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.426714 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/845e48b0-71b1-4f3b-82d2-db1f00c69601-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " pod="openstack/ceilometer-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.426729 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a35e5429-427c-4f54-ae6d-60648ef90eed-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a35e5429-427c-4f54-ae6d-60648ef90eed\") " pod="openstack/nova-api-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.426746 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/845e48b0-71b1-4f3b-82d2-db1f00c69601-log-httpd\") pod \"ceilometer-0\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " pod="openstack/ceilometer-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.426765 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/845e48b0-71b1-4f3b-82d2-db1f00c69601-scripts\") pod \"ceilometer-0\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " pod="openstack/ceilometer-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.426789 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a35e5429-427c-4f54-ae6d-60648ef90eed-logs\") pod \"nova-api-0\" (UID: \"a35e5429-427c-4f54-ae6d-60648ef90eed\") " pod="openstack/nova-api-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.426808 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a35e5429-427c-4f54-ae6d-60648ef90eed-config-data\") pod \"nova-api-0\" (UID: \"a35e5429-427c-4f54-ae6d-60648ef90eed\") " pod="openstack/nova-api-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.426822 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/845e48b0-71b1-4f3b-82d2-db1f00c69601-config-data\") pod \"ceilometer-0\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " pod="openstack/ceilometer-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.426843 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a35e5429-427c-4f54-ae6d-60648ef90eed-public-tls-certs\") pod \"nova-api-0\" (UID: \"a35e5429-427c-4f54-ae6d-60648ef90eed\") " pod="openstack/nova-api-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.426863 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7qh6\" (UniqueName: \"kubernetes.io/projected/845e48b0-71b1-4f3b-82d2-db1f00c69601-kube-api-access-c7qh6\") pod \"ceilometer-0\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " pod="openstack/ceilometer-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.426897 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwkxm\" (UniqueName: \"kubernetes.io/projected/a35e5429-427c-4f54-ae6d-60648ef90eed-kube-api-access-wwkxm\") pod \"nova-api-0\" (UID: \"a35e5429-427c-4f54-ae6d-60648ef90eed\") " pod="openstack/nova-api-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.529341 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/845e48b0-71b1-4f3b-82d2-db1f00c69601-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " pod="openstack/ceilometer-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.529385 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a35e5429-427c-4f54-ae6d-60648ef90eed-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a35e5429-427c-4f54-ae6d-60648ef90eed\") " pod="openstack/nova-api-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.529411 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/845e48b0-71b1-4f3b-82d2-db1f00c69601-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " pod="openstack/ceilometer-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.529431 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/845e48b0-71b1-4f3b-82d2-db1f00c69601-log-httpd\") pod \"ceilometer-0\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " pod="openstack/ceilometer-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.529459 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/845e48b0-71b1-4f3b-82d2-db1f00c69601-scripts\") pod \"ceilometer-0\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " pod="openstack/ceilometer-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.529486 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a35e5429-427c-4f54-ae6d-60648ef90eed-logs\") pod \"nova-api-0\" (UID: \"a35e5429-427c-4f54-ae6d-60648ef90eed\") " pod="openstack/nova-api-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.529508 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a35e5429-427c-4f54-ae6d-60648ef90eed-config-data\") pod \"nova-api-0\" (UID: \"a35e5429-427c-4f54-ae6d-60648ef90eed\") " pod="openstack/nova-api-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.529523 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/845e48b0-71b1-4f3b-82d2-db1f00c69601-config-data\") pod \"ceilometer-0\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " pod="openstack/ceilometer-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.529549 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a35e5429-427c-4f54-ae6d-60648ef90eed-public-tls-certs\") pod \"nova-api-0\" (UID: \"a35e5429-427c-4f54-ae6d-60648ef90eed\") " pod="openstack/nova-api-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.529576 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-c7qh6\" (UniqueName: \"kubernetes.io/projected/845e48b0-71b1-4f3b-82d2-db1f00c69601-kube-api-access-c7qh6\") pod \"ceilometer-0\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " pod="openstack/ceilometer-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.529597 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwkxm\" (UniqueName: \"kubernetes.io/projected/a35e5429-427c-4f54-ae6d-60648ef90eed-kube-api-access-wwkxm\") pod \"nova-api-0\" (UID: \"a35e5429-427c-4f54-ae6d-60648ef90eed\") " pod="openstack/nova-api-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.529652 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35e5429-427c-4f54-ae6d-60648ef90eed-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a35e5429-427c-4f54-ae6d-60648ef90eed\") " pod="openstack/nova-api-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.529697 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/845e48b0-71b1-4f3b-82d2-db1f00c69601-run-httpd\") pod \"ceilometer-0\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " pod="openstack/ceilometer-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.530752 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/845e48b0-71b1-4f3b-82d2-db1f00c69601-log-httpd\") pod \"ceilometer-0\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " pod="openstack/ceilometer-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.531560 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/845e48b0-71b1-4f3b-82d2-db1f00c69601-run-httpd\") pod \"ceilometer-0\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " pod="openstack/ceilometer-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.531654 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a35e5429-427c-4f54-ae6d-60648ef90eed-logs\") pod \"nova-api-0\" (UID: \"a35e5429-427c-4f54-ae6d-60648ef90eed\") " pod="openstack/nova-api-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.534945 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/845e48b0-71b1-4f3b-82d2-db1f00c69601-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " pod="openstack/ceilometer-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.535071 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/845e48b0-71b1-4f3b-82d2-db1f00c69601-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " pod="openstack/ceilometer-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.535609 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a35e5429-427c-4f54-ae6d-60648ef90eed-config-data\") pod \"nova-api-0\" (UID: \"a35e5429-427c-4f54-ae6d-60648ef90eed\") " pod="openstack/nova-api-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.535921 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a35e5429-427c-4f54-ae6d-60648ef90eed-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a35e5429-427c-4f54-ae6d-60648ef90eed\") " pod="openstack/nova-api-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.536221 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/845e48b0-71b1-4f3b-82d2-db1f00c69601-scripts\") pod \"ceilometer-0\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " pod="openstack/ceilometer-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.536391 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a35e5429-427c-4f54-ae6d-60648ef90eed-public-tls-certs\") pod \"nova-api-0\" (UID: \"a35e5429-427c-4f54-ae6d-60648ef90eed\") " pod="openstack/nova-api-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.536732 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/845e48b0-71b1-4f3b-82d2-db1f00c69601-config-data\") pod \"ceilometer-0\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " pod="openstack/ceilometer-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.548611 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7qh6\" (UniqueName: \"kubernetes.io/projected/845e48b0-71b1-4f3b-82d2-db1f00c69601-kube-api-access-c7qh6\") pod \"ceilometer-0\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " pod="openstack/ceilometer-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.554458 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwkxm\" (UniqueName: \"kubernetes.io/projected/a35e5429-427c-4f54-ae6d-60648ef90eed-kube-api-access-wwkxm\") pod \"nova-api-0\" (UID: \"a35e5429-427c-4f54-ae6d-60648ef90eed\") " pod="openstack/nova-api-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.559585 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a35e5429-427c-4f54-ae6d-60648ef90eed-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a35e5429-427c-4f54-ae6d-60648ef90eed\") " pod="openstack/nova-api-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.579673 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.597039 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.669688 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:40:25 crc kubenswrapper[4820]: I0201 14:40:25.709470 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:40:26 crc kubenswrapper[4820]: W0201 14:40:26.080206 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda35e5429_427c_4f54_ae6d_60648ef90eed.slice/crio-1f08d7c46d8522c79cb9dde4f96d1b85293ed8ede84fbfc7d9fca7350f6beb80 WatchSource:0}: Error finding container 1f08d7c46d8522c79cb9dde4f96d1b85293ed8ede84fbfc7d9fca7350f6beb80: Status 404 returned error can't find the container with id 1f08d7c46d8522c79cb9dde4f96d1b85293ed8ede84fbfc7d9fca7350f6beb80 Feb 01 14:40:26 crc kubenswrapper[4820]: I0201 14:40:26.088491 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 01 14:40:26 crc kubenswrapper[4820]: I0201 14:40:26.147958 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:40:26 crc kubenswrapper[4820]: I0201 14:40:26.149822 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a35e5429-427c-4f54-ae6d-60648ef90eed","Type":"ContainerStarted","Data":"1f08d7c46d8522c79cb9dde4f96d1b85293ed8ede84fbfc7d9fca7350f6beb80"} Feb 01 14:40:26 crc kubenswrapper[4820]: I0201 14:40:26.174402 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 01 14:40:26 crc kubenswrapper[4820]: I0201 14:40:26.322535 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-vbj27"] Feb 01 14:40:26 crc kubenswrapper[4820]: I0201 14:40:26.324232 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vbj27" Feb 01 14:40:26 crc kubenswrapper[4820]: I0201 14:40:26.326389 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 01 14:40:26 crc kubenswrapper[4820]: I0201 14:40:26.326615 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 01 14:40:26 crc kubenswrapper[4820]: I0201 14:40:26.334299 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-vbj27"] Feb 01 14:40:26 crc kubenswrapper[4820]: I0201 14:40:26.449543 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5964c28-fda2-4e6d-85cf-59e7bf1ec9de-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vbj27\" (UID: \"b5964c28-fda2-4e6d-85cf-59e7bf1ec9de\") " pod="openstack/nova-cell1-cell-mapping-vbj27" Feb 01 14:40:26 crc kubenswrapper[4820]: I0201 14:40:26.449626 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbf85\" (UniqueName: \"kubernetes.io/projected/b5964c28-fda2-4e6d-85cf-59e7bf1ec9de-kube-api-access-nbf85\") pod \"nova-cell1-cell-mapping-vbj27\" (UID: \"b5964c28-fda2-4e6d-85cf-59e7bf1ec9de\") " pod="openstack/nova-cell1-cell-mapping-vbj27" Feb 01 14:40:26 crc kubenswrapper[4820]: I0201 14:40:26.449726 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5964c28-fda2-4e6d-85cf-59e7bf1ec9de-scripts\") pod \"nova-cell1-cell-mapping-vbj27\" (UID: \"b5964c28-fda2-4e6d-85cf-59e7bf1ec9de\") " pod="openstack/nova-cell1-cell-mapping-vbj27" Feb 01 14:40:26 crc kubenswrapper[4820]: I0201 14:40:26.449996 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5964c28-fda2-4e6d-85cf-59e7bf1ec9de-config-data\") pod \"nova-cell1-cell-mapping-vbj27\" (UID: \"b5964c28-fda2-4e6d-85cf-59e7bf1ec9de\") " pod="openstack/nova-cell1-cell-mapping-vbj27" Feb 01 14:40:26 crc kubenswrapper[4820]: I0201 14:40:26.551729 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5964c28-fda2-4e6d-85cf-59e7bf1ec9de-config-data\") pod \"nova-cell1-cell-mapping-vbj27\" (UID: \"b5964c28-fda2-4e6d-85cf-59e7bf1ec9de\") " pod="openstack/nova-cell1-cell-mapping-vbj27" Feb 01 14:40:26 crc kubenswrapper[4820]: I0201 14:40:26.551824 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5964c28-fda2-4e6d-85cf-59e7bf1ec9de-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vbj27\" (UID: \"b5964c28-fda2-4e6d-85cf-59e7bf1ec9de\") " pod="openstack/nova-cell1-cell-mapping-vbj27" Feb 01 14:40:26 crc kubenswrapper[4820]: I0201 14:40:26.551901 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbf85\" (UniqueName: \"kubernetes.io/projected/b5964c28-fda2-4e6d-85cf-59e7bf1ec9de-kube-api-access-nbf85\") pod \"nova-cell1-cell-mapping-vbj27\" (UID: \"b5964c28-fda2-4e6d-85cf-59e7bf1ec9de\") " pod="openstack/nova-cell1-cell-mapping-vbj27" Feb 01 14:40:26 crc kubenswrapper[4820]: I0201 14:40:26.551960 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b5964c28-fda2-4e6d-85cf-59e7bf1ec9de-scripts\") pod \"nova-cell1-cell-mapping-vbj27\" (UID: \"b5964c28-fda2-4e6d-85cf-59e7bf1ec9de\") " pod="openstack/nova-cell1-cell-mapping-vbj27" Feb 01 14:40:26 crc kubenswrapper[4820]: I0201 14:40:26.555381 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5964c28-fda2-4e6d-85cf-59e7bf1ec9de-scripts\") pod \"nova-cell1-cell-mapping-vbj27\" (UID: \"b5964c28-fda2-4e6d-85cf-59e7bf1ec9de\") " pod="openstack/nova-cell1-cell-mapping-vbj27" Feb 01 14:40:26 crc kubenswrapper[4820]: I0201 14:40:26.557802 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5964c28-fda2-4e6d-85cf-59e7bf1ec9de-config-data\") pod \"nova-cell1-cell-mapping-vbj27\" (UID: \"b5964c28-fda2-4e6d-85cf-59e7bf1ec9de\") " pod="openstack/nova-cell1-cell-mapping-vbj27" Feb 01 14:40:26 crc kubenswrapper[4820]: I0201 14:40:26.558216 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5964c28-fda2-4e6d-85cf-59e7bf1ec9de-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vbj27\" (UID: \"b5964c28-fda2-4e6d-85cf-59e7bf1ec9de\") " pod="openstack/nova-cell1-cell-mapping-vbj27" Feb 01 14:40:26 crc kubenswrapper[4820]: I0201 14:40:26.570991 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbf85\" (UniqueName: \"kubernetes.io/projected/b5964c28-fda2-4e6d-85cf-59e7bf1ec9de-kube-api-access-nbf85\") pod \"nova-cell1-cell-mapping-vbj27\" (UID: \"b5964c28-fda2-4e6d-85cf-59e7bf1ec9de\") " pod="openstack/nova-cell1-cell-mapping-vbj27" Feb 01 14:40:26 crc kubenswrapper[4820]: I0201 14:40:26.736778 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vbj27" Feb 01 14:40:27 crc kubenswrapper[4820]: I0201 14:40:27.167510 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"845e48b0-71b1-4f3b-82d2-db1f00c69601","Type":"ContainerStarted","Data":"cb169db8a65e1c2d5be7a7a22a3fa4c091eaecfa985f45b5fefcf1d095f29214"} Feb 01 14:40:27 crc kubenswrapper[4820]: I0201 14:40:27.167813 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"845e48b0-71b1-4f3b-82d2-db1f00c69601","Type":"ContainerStarted","Data":"f830af1f15eddc29f21ddcdbc6efaedde9e5379b89a3e7c475d2c478c910d729"} Feb 01 14:40:27 crc kubenswrapper[4820]: I0201 14:40:27.169484 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a35e5429-427c-4f54-ae6d-60648ef90eed","Type":"ContainerStarted","Data":"e95743d7939aaef69e4986d56ea9fa2a3e829b23d1a2970309c8d71f4ea72a09"} Feb 01 14:40:27 crc kubenswrapper[4820]: I0201 14:40:27.169510 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a35e5429-427c-4f54-ae6d-60648ef90eed","Type":"ContainerStarted","Data":"a1470159fcebf72e5ac0afbf06d350476b53fa7299e38b9c9fe13eec786abfab"} Feb 01 14:40:27 crc kubenswrapper[4820]: I0201 14:40:27.211771 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.211752041 podStartE2EDuration="2.211752041s" podCreationTimestamp="2026-02-01 14:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:40:27.198404205 +0000 UTC m=+1168.718770499" watchObservedRunningTime="2026-02-01 14:40:27.211752041 +0000 UTC m=+1168.732118325" Feb 01 14:40:27 crc kubenswrapper[4820]: I0201 14:40:27.212795 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36042293-d59a-4aab-b851-e06233b41191" path="/var/lib/kubelet/pods/36042293-d59a-4aab-b851-e06233b41191/volumes" Feb 01 14:40:27 crc kubenswrapper[4820]: I0201 14:40:27.214046 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-vbj27"] Feb 01 14:40:28 crc kubenswrapper[4820]: I0201 14:40:28.179475 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vbj27" event={"ID":"b5964c28-fda2-4e6d-85cf-59e7bf1ec9de","Type":"ContainerStarted","Data":"76dfe0b4b7a5a7d86c565f56ca18fcd30a6dfa5626b5a9dffc541bea05e4ee74"} Feb 01 14:40:28 crc kubenswrapper[4820]: I0201 14:40:28.179858 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vbj27" event={"ID":"b5964c28-fda2-4e6d-85cf-59e7bf1ec9de","Type":"ContainerStarted","Data":"ac48aecb9b2299898450f96e5ca7bb7a66da4d46c8a48973b431e44f3ea0d8d4"} Feb 01 14:40:28 crc kubenswrapper[4820]: I0201 14:40:28.181803 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"845e48b0-71b1-4f3b-82d2-db1f00c69601","Type":"ContainerStarted","Data":"eb0021f9063cb73a53bb654661860c3a3b3327a7cda81647f212d0971fdf38df"} Feb 01 14:40:28 crc kubenswrapper[4820]: I0201 14:40:28.181833 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"845e48b0-71b1-4f3b-82d2-db1f00c69601","Type":"ContainerStarted","Data":"797ad3675213eb9431dd3ac3e9c01b54dfaa2c9921a88da45c913b78f6da7336"} Feb 01 14:40:28 crc kubenswrapper[4820]: I0201 14:40:28.196707 4820 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-vbj27" podStartSLOduration=2.196691364 podStartE2EDuration="2.196691364s" podCreationTimestamp="2026-02-01 14:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:40:28.19569956 +0000 UTC m=+1169.716065844" watchObservedRunningTime="2026-02-01 14:40:28.196691364 +0000 UTC m=+1169.717057648" Feb 01 14:40:28 crc kubenswrapper[4820]: I0201 14:40:28.566584 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" Feb 01 14:40:28 crc kubenswrapper[4820]: I0201 14:40:28.635854 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-ltqrv"] Feb 01 14:40:28 crc kubenswrapper[4820]: I0201 14:40:28.636278 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" podUID="c4a92823-3b74-4ef8-8104-b655c13d44ee" containerName="dnsmasq-dns" containerID="cri-o://5d5812a21e0848264db65e67fe20cc5335bb6c4d89904afdae3752c234aa757a" gracePeriod=10 Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.173729 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.197699 4820 generic.go:334] "Generic (PLEG): container finished" podID="c4a92823-3b74-4ef8-8104-b655c13d44ee" containerID="5d5812a21e0848264db65e67fe20cc5335bb6c4d89904afdae3752c234aa757a" exitCode=0 Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.197849 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.228948 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" event={"ID":"c4a92823-3b74-4ef8-8104-b655c13d44ee","Type":"ContainerDied","Data":"5d5812a21e0848264db65e67fe20cc5335bb6c4d89904afdae3752c234aa757a"} Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.230108 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-ltqrv" event={"ID":"c4a92823-3b74-4ef8-8104-b655c13d44ee","Type":"ContainerDied","Data":"884e14265ec592521c9cf96bf0e0633533866ad54bc8d03296ba1d4831d18e42"} Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.230246 4820 scope.go:117] "RemoveContainer" containerID="5d5812a21e0848264db65e67fe20cc5335bb6c4d89904afdae3752c234aa757a" Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.258913 4820 scope.go:117] "RemoveContainer" containerID="7c6b1fe6696b056aff48ef1bc0584e51449ed8d049cd0aac799194c9a88afd0b" Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.288775 4820 scope.go:117] "RemoveContainer" containerID="5d5812a21e0848264db65e67fe20cc5335bb6c4d89904afdae3752c234aa757a" Feb 01 14:40:29 crc kubenswrapper[4820]: E0201 14:40:29.289208 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d5812a21e0848264db65e67fe20cc5335bb6c4d89904afdae3752c234aa757a\": container with ID starting with 5d5812a21e0848264db65e67fe20cc5335bb6c4d89904afdae3752c234aa757a not found: ID does not exist" containerID="5d5812a21e0848264db65e67fe20cc5335bb6c4d89904afdae3752c234aa757a" Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.289242 4820 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d5812a21e0848264db65e67fe20cc5335bb6c4d89904afdae3752c234aa757a"} err="failed to get container status \"5d5812a21e0848264db65e67fe20cc5335bb6c4d89904afdae3752c234aa757a\": rpc error: code = NotFound desc = could not find container \"5d5812a21e0848264db65e67fe20cc5335bb6c4d89904afdae3752c234aa757a\": container with ID starting with 5d5812a21e0848264db65e67fe20cc5335bb6c4d89904afdae3752c234aa757a not found: ID does not exist" Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.289265 4820 scope.go:117] "RemoveContainer" containerID="7c6b1fe6696b056aff48ef1bc0584e51449ed8d049cd0aac799194c9a88afd0b" Feb 01 14:40:29 crc kubenswrapper[4820]: E0201 14:40:29.289651 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c6b1fe6696b056aff48ef1bc0584e51449ed8d049cd0aac799194c9a88afd0b\": container with ID starting with 7c6b1fe6696b056aff48ef1bc0584e51449ed8d049cd0aac799194c9a88afd0b not found: ID does not exist" containerID="7c6b1fe6696b056aff48ef1bc0584e51449ed8d049cd0aac799194c9a88afd0b" Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.289674 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c6b1fe6696b056aff48ef1bc0584e51449ed8d049cd0aac799194c9a88afd0b"} err="failed to get container status \"7c6b1fe6696b056aff48ef1bc0584e51449ed8d049cd0aac799194c9a88afd0b\": rpc error: code = NotFound desc = could not find container \"7c6b1fe6696b056aff48ef1bc0584e51449ed8d049cd0aac799194c9a88afd0b\": container with ID starting with 7c6b1fe6696b056aff48ef1bc0584e51449ed8d049cd0aac799194c9a88afd0b not found: ID does not exist" Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.316432 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4a92823-3b74-4ef8-8104-b655c13d44ee-config\") pod \"c4a92823-3b74-4ef8-8104-b655c13d44ee\" (UID: \"c4a92823-3b74-4ef8-8104-b655c13d44ee\") " Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.316514 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4a92823-3b74-4ef8-8104-b655c13d44ee-ovsdbserver-sb\") pod \"c4a92823-3b74-4ef8-8104-b655c13d44ee\" (UID: \"c4a92823-3b74-4ef8-8104-b655c13d44ee\") " Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.316583 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4a92823-3b74-4ef8-8104-b655c13d44ee-ovsdbserver-nb\") pod \"c4a92823-3b74-4ef8-8104-b655c13d44ee\" (UID: \"c4a92823-3b74-4ef8-8104-b655c13d44ee\") " Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.316622 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4a92823-3b74-4ef8-8104-b655c13d44ee-dns-svc\") pod \"c4a92823-3b74-4ef8-8104-b655c13d44ee\" (UID: \"c4a92823-3b74-4ef8-8104-b655c13d44ee\") " Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.316673 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkwfx\" (UniqueName: \"kubernetes.io/projected/c4a92823-3b74-4ef8-8104-b655c13d44ee-kube-api-access-tkwfx\") pod \"c4a92823-3b74-4ef8-8104-b655c13d44ee\" (UID: \"c4a92823-3b74-4ef8-8104-b655c13d44ee\") " Feb 01 14:40:29 crc 
kubenswrapper[4820]: I0201 14:40:29.329122 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4a92823-3b74-4ef8-8104-b655c13d44ee-kube-api-access-tkwfx" (OuterVolumeSpecName: "kube-api-access-tkwfx") pod "c4a92823-3b74-4ef8-8104-b655c13d44ee" (UID: "c4a92823-3b74-4ef8-8104-b655c13d44ee"). InnerVolumeSpecName "kube-api-access-tkwfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.370207 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4a92823-3b74-4ef8-8104-b655c13d44ee-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c4a92823-3b74-4ef8-8104-b655c13d44ee" (UID: "c4a92823-3b74-4ef8-8104-b655c13d44ee"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.371089 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4a92823-3b74-4ef8-8104-b655c13d44ee-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c4a92823-3b74-4ef8-8104-b655c13d44ee" (UID: "c4a92823-3b74-4ef8-8104-b655c13d44ee"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.375961 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4a92823-3b74-4ef8-8104-b655c13d44ee-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c4a92823-3b74-4ef8-8104-b655c13d44ee" (UID: "c4a92823-3b74-4ef8-8104-b655c13d44ee"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.383579 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4a92823-3b74-4ef8-8104-b655c13d44ee-config" (OuterVolumeSpecName: "config") pod "c4a92823-3b74-4ef8-8104-b655c13d44ee" (UID: "c4a92823-3b74-4ef8-8104-b655c13d44ee"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.432396 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4a92823-3b74-4ef8-8104-b655c13d44ee-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.432463 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4a92823-3b74-4ef8-8104-b655c13d44ee-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.432489 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4a92823-3b74-4ef8-8104-b655c13d44ee-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.432514 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4a92823-3b74-4ef8-8104-b655c13d44ee-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.432529 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkwfx\" (UniqueName: \"kubernetes.io/projected/c4a92823-3b74-4ef8-8104-b655c13d44ee-kube-api-access-tkwfx\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.531941 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-ltqrv"] Feb 01 14:40:29 crc kubenswrapper[4820]: I0201 14:40:29.541484 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-ltqrv"] Feb 01 14:40:31 crc kubenswrapper[4820]: I0201 14:40:31.214369 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4a92823-3b74-4ef8-8104-b655c13d44ee" path="/var/lib/kubelet/pods/c4a92823-3b74-4ef8-8104-b655c13d44ee/volumes" Feb 01 14:40:31 crc kubenswrapper[4820]: I0201 14:40:31.232705 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"845e48b0-71b1-4f3b-82d2-db1f00c69601","Type":"ContainerStarted","Data":"7577ed8540d975f423976305bb3705f4eafd461274edd3901d23c43bda9c9e8d"} Feb 01 14:40:31 crc kubenswrapper[4820]: I0201 14:40:31.233132 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 01 14:40:32 crc kubenswrapper[4820]: I0201 14:40:32.245374 4820 generic.go:334] "Generic (PLEG): container finished" podID="b5964c28-fda2-4e6d-85cf-59e7bf1ec9de" containerID="76dfe0b4b7a5a7d86c565f56ca18fcd30a6dfa5626b5a9dffc541bea05e4ee74" exitCode=0 Feb 01 14:40:32 crc kubenswrapper[4820]: I0201 14:40:32.247759 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vbj27" event={"ID":"b5964c28-fda2-4e6d-85cf-59e7bf1ec9de","Type":"ContainerDied","Data":"76dfe0b4b7a5a7d86c565f56ca18fcd30a6dfa5626b5a9dffc541bea05e4ee74"} Feb 01 14:40:32 crc kubenswrapper[4820]: I0201 14:40:32.274901 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.080482791 podStartE2EDuration="7.274865398s" podCreationTimestamp="2026-02-01 14:40:25 +0000 UTC" firstStartedPulling="2026-02-01 14:40:26.154735886 +0000 UTC m=+1167.675102170" lastFinishedPulling="2026-02-01 14:40:30.349118503 +0000 UTC m=+1171.869484777" observedRunningTime="2026-02-01 14:40:31.257294768 +0000 UTC m=+1172.777661052" watchObservedRunningTime="2026-02-01 
14:40:32.274865398 +0000 UTC m=+1173.795231682" Feb 01 14:40:33 crc kubenswrapper[4820]: I0201 14:40:33.703311 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vbj27" Feb 01 14:40:33 crc kubenswrapper[4820]: I0201 14:40:33.817404 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5964c28-fda2-4e6d-85cf-59e7bf1ec9de-config-data\") pod \"b5964c28-fda2-4e6d-85cf-59e7bf1ec9de\" (UID: \"b5964c28-fda2-4e6d-85cf-59e7bf1ec9de\") " Feb 01 14:40:33 crc kubenswrapper[4820]: I0201 14:40:33.817858 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5964c28-fda2-4e6d-85cf-59e7bf1ec9de-combined-ca-bundle\") pod \"b5964c28-fda2-4e6d-85cf-59e7bf1ec9de\" (UID: \"b5964c28-fda2-4e6d-85cf-59e7bf1ec9de\") " Feb 01 14:40:33 crc kubenswrapper[4820]: I0201 14:40:33.817955 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbf85\" (UniqueName: \"kubernetes.io/projected/b5964c28-fda2-4e6d-85cf-59e7bf1ec9de-kube-api-access-nbf85\") pod \"b5964c28-fda2-4e6d-85cf-59e7bf1ec9de\" (UID: \"b5964c28-fda2-4e6d-85cf-59e7bf1ec9de\") " Feb 01 14:40:33 crc kubenswrapper[4820]: I0201 14:40:33.818121 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5964c28-fda2-4e6d-85cf-59e7bf1ec9de-scripts\") pod \"b5964c28-fda2-4e6d-85cf-59e7bf1ec9de\" (UID: \"b5964c28-fda2-4e6d-85cf-59e7bf1ec9de\") " Feb 01 14:40:33 crc kubenswrapper[4820]: I0201 14:40:33.823175 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5964c28-fda2-4e6d-85cf-59e7bf1ec9de-kube-api-access-nbf85" (OuterVolumeSpecName: "kube-api-access-nbf85") pod "b5964c28-fda2-4e6d-85cf-59e7bf1ec9de" (UID: "b5964c28-fda2-4e6d-85cf-59e7bf1ec9de"). InnerVolumeSpecName "kube-api-access-nbf85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:40:33 crc kubenswrapper[4820]: I0201 14:40:33.824809 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5964c28-fda2-4e6d-85cf-59e7bf1ec9de-scripts" (OuterVolumeSpecName: "scripts") pod "b5964c28-fda2-4e6d-85cf-59e7bf1ec9de" (UID: "b5964c28-fda2-4e6d-85cf-59e7bf1ec9de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:40:33 crc kubenswrapper[4820]: I0201 14:40:33.845271 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5964c28-fda2-4e6d-85cf-59e7bf1ec9de-config-data" (OuterVolumeSpecName: "config-data") pod "b5964c28-fda2-4e6d-85cf-59e7bf1ec9de" (UID: "b5964c28-fda2-4e6d-85cf-59e7bf1ec9de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:40:33 crc kubenswrapper[4820]: I0201 14:40:33.870795 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5964c28-fda2-4e6d-85cf-59e7bf1ec9de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5964c28-fda2-4e6d-85cf-59e7bf1ec9de" (UID: "b5964c28-fda2-4e6d-85cf-59e7bf1ec9de"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:40:33 crc kubenswrapper[4820]: I0201 14:40:33.921475 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5964c28-fda2-4e6d-85cf-59e7bf1ec9de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:33 crc kubenswrapper[4820]: I0201 14:40:33.921501 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbf85\" (UniqueName: \"kubernetes.io/projected/b5964c28-fda2-4e6d-85cf-59e7bf1ec9de-kube-api-access-nbf85\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:33 crc kubenswrapper[4820]: I0201 14:40:33.921512 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5964c28-fda2-4e6d-85cf-59e7bf1ec9de-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:33 crc kubenswrapper[4820]: I0201 14:40:33.921521 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5964c28-fda2-4e6d-85cf-59e7bf1ec9de-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:34 crc kubenswrapper[4820]: I0201 14:40:34.275696 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vbj27" event={"ID":"b5964c28-fda2-4e6d-85cf-59e7bf1ec9de","Type":"ContainerDied","Data":"ac48aecb9b2299898450f96e5ca7bb7a66da4d46c8a48973b431e44f3ea0d8d4"} Feb 01 14:40:34 crc kubenswrapper[4820]: I0201 14:40:34.275762 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac48aecb9b2299898450f96e5ca7bb7a66da4d46c8a48973b431e44f3ea0d8d4" Feb 01 14:40:34 crc kubenswrapper[4820]: I0201 14:40:34.275860 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vbj27" Feb 01 14:40:34 crc kubenswrapper[4820]: I0201 14:40:34.463371 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 01 14:40:34 crc kubenswrapper[4820]: I0201 14:40:34.463660 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a35e5429-427c-4f54-ae6d-60648ef90eed" containerName="nova-api-log" containerID="cri-o://a1470159fcebf72e5ac0afbf06d350476b53fa7299e38b9c9fe13eec786abfab" gracePeriod=30 Feb 01 14:40:34 crc kubenswrapper[4820]: I0201 14:40:34.463719 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a35e5429-427c-4f54-ae6d-60648ef90eed" containerName="nova-api-api" containerID="cri-o://e95743d7939aaef69e4986d56ea9fa2a3e829b23d1a2970309c8d71f4ea72a09" gracePeriod=30 Feb 01 14:40:34 crc kubenswrapper[4820]: I0201 14:40:34.490240 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 01 14:40:34 crc kubenswrapper[4820]: I0201 14:40:34.490468 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ae340692-583b-402a-8638-a9d9d9442c08" containerName="nova-metadata-log" containerID="cri-o://f1191aa60657e9f0c8ee307db774f0f0363c0c81819461406081c737a591c58c" gracePeriod=30 Feb 01 14:40:34 crc kubenswrapper[4820]: I0201 14:40:34.490531 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ae340692-583b-402a-8638-a9d9d9442c08" containerName="nova-metadata-metadata" containerID="cri-o://be62e9dfc6b08cddab07e43fd5275fc416cebb1ce0a01922d8c35357dc903a3b" gracePeriod=30 Feb 01 14:40:34 crc 
kubenswrapper[4820]: I0201 14:40:34.515439 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 01 14:40:34 crc kubenswrapper[4820]: I0201 14:40:34.515679 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4102147a-54c1-4f63-8f88-92e634a6db94" containerName="nova-scheduler-scheduler" containerID="cri-o://a6c6a76668d869fdf947866aeda672aa3e2334b0330360ddbe7c881662b21a37" gracePeriod=30 Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.072269 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.143412 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a35e5429-427c-4f54-ae6d-60648ef90eed-logs\") pod \"a35e5429-427c-4f54-ae6d-60648ef90eed\" (UID: \"a35e5429-427c-4f54-ae6d-60648ef90eed\") " Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.143473 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35e5429-427c-4f54-ae6d-60648ef90eed-combined-ca-bundle\") pod \"a35e5429-427c-4f54-ae6d-60648ef90eed\" (UID: \"a35e5429-427c-4f54-ae6d-60648ef90eed\") " Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.143520 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a35e5429-427c-4f54-ae6d-60648ef90eed-config-data\") pod \"a35e5429-427c-4f54-ae6d-60648ef90eed\" (UID: \"a35e5429-427c-4f54-ae6d-60648ef90eed\") " Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.143580 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwkxm\" (UniqueName: \"kubernetes.io/projected/a35e5429-427c-4f54-ae6d-60648ef90eed-kube-api-access-wwkxm\") pod \"a35e5429-427c-4f54-ae6d-60648ef90eed\" (UID: \"a35e5429-427c-4f54-ae6d-60648ef90eed\") " Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.143691 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a35e5429-427c-4f54-ae6d-60648ef90eed-internal-tls-certs\") pod \"a35e5429-427c-4f54-ae6d-60648ef90eed\" (UID: \"a35e5429-427c-4f54-ae6d-60648ef90eed\") " Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.143753 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a35e5429-427c-4f54-ae6d-60648ef90eed-public-tls-certs\") pod \"a35e5429-427c-4f54-ae6d-60648ef90eed\" (UID: \"a35e5429-427c-4f54-ae6d-60648ef90eed\") " Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.143745 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a35e5429-427c-4f54-ae6d-60648ef90eed-logs" (OuterVolumeSpecName: "logs") pod "a35e5429-427c-4f54-ae6d-60648ef90eed" (UID: "a35e5429-427c-4f54-ae6d-60648ef90eed"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.144411 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a35e5429-427c-4f54-ae6d-60648ef90eed-logs\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.148278 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a35e5429-427c-4f54-ae6d-60648ef90eed-kube-api-access-wwkxm" (OuterVolumeSpecName: "kube-api-access-wwkxm") pod "a35e5429-427c-4f54-ae6d-60648ef90eed" (UID: "a35e5429-427c-4f54-ae6d-60648ef90eed"). InnerVolumeSpecName "kube-api-access-wwkxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.168810 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a35e5429-427c-4f54-ae6d-60648ef90eed-config-data" (OuterVolumeSpecName: "config-data") pod "a35e5429-427c-4f54-ae6d-60648ef90eed" (UID: "a35e5429-427c-4f54-ae6d-60648ef90eed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.169284 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a35e5429-427c-4f54-ae6d-60648ef90eed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a35e5429-427c-4f54-ae6d-60648ef90eed" (UID: "a35e5429-427c-4f54-ae6d-60648ef90eed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.190377 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a35e5429-427c-4f54-ae6d-60648ef90eed-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a35e5429-427c-4f54-ae6d-60648ef90eed" (UID: "a35e5429-427c-4f54-ae6d-60648ef90eed"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.193394 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a35e5429-427c-4f54-ae6d-60648ef90eed-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a35e5429-427c-4f54-ae6d-60648ef90eed" (UID: "a35e5429-427c-4f54-ae6d-60648ef90eed"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.245930 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35e5429-427c-4f54-ae6d-60648ef90eed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.245961 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a35e5429-427c-4f54-ae6d-60648ef90eed-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.245996 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwkxm\" (UniqueName: \"kubernetes.io/projected/a35e5429-427c-4f54-ae6d-60648ef90eed-kube-api-access-wwkxm\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.246006 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a35e5429-427c-4f54-ae6d-60648ef90eed-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.246015 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a35e5429-427c-4f54-ae6d-60648ef90eed-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.285861 4820 generic.go:334] "Generic (PLEG): container finished" podID="ae340692-583b-402a-8638-a9d9d9442c08" containerID="f1191aa60657e9f0c8ee307db774f0f0363c0c81819461406081c737a591c58c" exitCode=143 Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.285943 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ae340692-583b-402a-8638-a9d9d9442c08","Type":"ContainerDied","Data":"f1191aa60657e9f0c8ee307db774f0f0363c0c81819461406081c737a591c58c"} Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.289981 4820 generic.go:334] "Generic (PLEG): container finished" podID="a35e5429-427c-4f54-ae6d-60648ef90eed" containerID="e95743d7939aaef69e4986d56ea9fa2a3e829b23d1a2970309c8d71f4ea72a09" exitCode=0 Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.290012 4820 generic.go:334] "Generic (PLEG): container finished" podID="a35e5429-427c-4f54-ae6d-60648ef90eed" containerID="a1470159fcebf72e5ac0afbf06d350476b53fa7299e38b9c9fe13eec786abfab" exitCode=143 Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.290057 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a35e5429-427c-4f54-ae6d-60648ef90eed","Type":"ContainerDied","Data":"e95743d7939aaef69e4986d56ea9fa2a3e829b23d1a2970309c8d71f4ea72a09"} Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.290082 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a35e5429-427c-4f54-ae6d-60648ef90eed","Type":"ContainerDied","Data":"a1470159fcebf72e5ac0afbf06d350476b53fa7299e38b9c9fe13eec786abfab"} Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.290070 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.290121 4820 scope.go:117] "RemoveContainer" containerID="e95743d7939aaef69e4986d56ea9fa2a3e829b23d1a2970309c8d71f4ea72a09" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.290091 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a35e5429-427c-4f54-ae6d-60648ef90eed","Type":"ContainerDied","Data":"1f08d7c46d8522c79cb9dde4f96d1b85293ed8ede84fbfc7d9fca7350f6beb80"} Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.316081 4820 scope.go:117] "RemoveContainer" containerID="a1470159fcebf72e5ac0afbf06d350476b53fa7299e38b9c9fe13eec786abfab" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.317009 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.338536 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.340948 4820 scope.go:117] "RemoveContainer" containerID="e95743d7939aaef69e4986d56ea9fa2a3e829b23d1a2970309c8d71f4ea72a09" Feb 01 14:40:35 crc kubenswrapper[4820]: E0201 14:40:35.341456 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e95743d7939aaef69e4986d56ea9fa2a3e829b23d1a2970309c8d71f4ea72a09\": container with ID starting with e95743d7939aaef69e4986d56ea9fa2a3e829b23d1a2970309c8d71f4ea72a09 not found: ID does not exist" containerID="e95743d7939aaef69e4986d56ea9fa2a3e829b23d1a2970309c8d71f4ea72a09" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.341487 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e95743d7939aaef69e4986d56ea9fa2a3e829b23d1a2970309c8d71f4ea72a09"} err="failed to get container status \"e95743d7939aaef69e4986d56ea9fa2a3e829b23d1a2970309c8d71f4ea72a09\": rpc error: code = NotFound desc = could not find container \"e95743d7939aaef69e4986d56ea9fa2a3e829b23d1a2970309c8d71f4ea72a09\": container with ID starting with e95743d7939aaef69e4986d56ea9fa2a3e829b23d1a2970309c8d71f4ea72a09 not found: ID does not exist" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.341507 4820 scope.go:117] "RemoveContainer" containerID="a1470159fcebf72e5ac0afbf06d350476b53fa7299e38b9c9fe13eec786abfab" Feb 01 14:40:35 crc kubenswrapper[4820]: E0201 14:40:35.341966 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1470159fcebf72e5ac0afbf06d350476b53fa7299e38b9c9fe13eec786abfab\": container with ID starting with a1470159fcebf72e5ac0afbf06d350476b53fa7299e38b9c9fe13eec786abfab not found: ID does not exist" containerID="a1470159fcebf72e5ac0afbf06d350476b53fa7299e38b9c9fe13eec786abfab" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.342006 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1470159fcebf72e5ac0afbf06d350476b53fa7299e38b9c9fe13eec786abfab"} err="failed to get container status \"a1470159fcebf72e5ac0afbf06d350476b53fa7299e38b9c9fe13eec786abfab\": rpc error: code = NotFound desc = could not find container \"a1470159fcebf72e5ac0afbf06d350476b53fa7299e38b9c9fe13eec786abfab\": container with ID starting with a1470159fcebf72e5ac0afbf06d350476b53fa7299e38b9c9fe13eec786abfab not found: ID does not exist" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.342041 4820 scope.go:117] 
"RemoveContainer" containerID="e95743d7939aaef69e4986d56ea9fa2a3e829b23d1a2970309c8d71f4ea72a09" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.342322 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e95743d7939aaef69e4986d56ea9fa2a3e829b23d1a2970309c8d71f4ea72a09"} err="failed to get container status \"e95743d7939aaef69e4986d56ea9fa2a3e829b23d1a2970309c8d71f4ea72a09\": rpc error: code = NotFound desc = could not find container \"e95743d7939aaef69e4986d56ea9fa2a3e829b23d1a2970309c8d71f4ea72a09\": container with ID starting with e95743d7939aaef69e4986d56ea9fa2a3e829b23d1a2970309c8d71f4ea72a09 not found: ID does not exist" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.342343 4820 scope.go:117] "RemoveContainer" containerID="a1470159fcebf72e5ac0afbf06d350476b53fa7299e38b9c9fe13eec786abfab" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.342592 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1470159fcebf72e5ac0afbf06d350476b53fa7299e38b9c9fe13eec786abfab"} err="failed to get container status \"a1470159fcebf72e5ac0afbf06d350476b53fa7299e38b9c9fe13eec786abfab\": rpc error: code = NotFound desc = could not find container \"a1470159fcebf72e5ac0afbf06d350476b53fa7299e38b9c9fe13eec786abfab\": container with ID starting with a1470159fcebf72e5ac0afbf06d350476b53fa7299e38b9c9fe13eec786abfab not found: ID does not exist" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.352892 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 01 14:40:35 crc kubenswrapper[4820]: E0201 14:40:35.353418 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5964c28-fda2-4e6d-85cf-59e7bf1ec9de" containerName="nova-manage" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.353439 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5964c28-fda2-4e6d-85cf-59e7bf1ec9de" containerName="nova-manage" Feb 01 14:40:35 crc kubenswrapper[4820]: E0201 14:40:35.353457 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a35e5429-427c-4f54-ae6d-60648ef90eed" containerName="nova-api-api" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.353465 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a35e5429-427c-4f54-ae6d-60648ef90eed" containerName="nova-api-api" Feb 01 14:40:35 crc kubenswrapper[4820]: E0201 14:40:35.353513 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a92823-3b74-4ef8-8104-b655c13d44ee" containerName="init" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.353521 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a92823-3b74-4ef8-8104-b655c13d44ee" containerName="init" Feb 01 14:40:35 crc kubenswrapper[4820]: E0201 14:40:35.353547 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a35e5429-427c-4f54-ae6d-60648ef90eed" containerName="nova-api-log" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.353580 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a35e5429-427c-4f54-ae6d-60648ef90eed" containerName="nova-api-log" Feb 01 14:40:35 crc kubenswrapper[4820]: E0201 14:40:35.353595 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a92823-3b74-4ef8-8104-b655c13d44ee" containerName="dnsmasq-dns" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.353603 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a92823-3b74-4ef8-8104-b655c13d44ee" containerName="dnsmasq-dns" Feb 01 
14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.354025 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a35e5429-427c-4f54-ae6d-60648ef90eed" containerName="nova-api-api" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.354079 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5964c28-fda2-4e6d-85cf-59e7bf1ec9de" containerName="nova-manage" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.354110 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4a92823-3b74-4ef8-8104-b655c13d44ee" containerName="dnsmasq-dns" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.354150 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a35e5429-427c-4f54-ae6d-60648ef90eed" containerName="nova-api-log" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.358554 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.360383 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.360659 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.360862 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.361770 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.449673 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fd1b280-fb87-44c5-ab0e-fff3fedfff7d-logs\") pod \"nova-api-0\" (UID: \"2fd1b280-fb87-44c5-ab0e-fff3fedfff7d\") " pod="openstack/nova-api-0" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.449814 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fd1b280-fb87-44c5-ab0e-fff3fedfff7d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2fd1b280-fb87-44c5-ab0e-fff3fedfff7d\") " pod="openstack/nova-api-0" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.449903 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fd1b280-fb87-44c5-ab0e-fff3fedfff7d-config-data\") pod \"nova-api-0\" (UID: \"2fd1b280-fb87-44c5-ab0e-fff3fedfff7d\") " pod="openstack/nova-api-0" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.449929 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fd1b280-fb87-44c5-ab0e-fff3fedfff7d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2fd1b280-fb87-44c5-ab0e-fff3fedfff7d\") " pod="openstack/nova-api-0" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.449963 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wcnn\" (UniqueName: \"kubernetes.io/projected/2fd1b280-fb87-44c5-ab0e-fff3fedfff7d-kube-api-access-4wcnn\") pod \"nova-api-0\" (UID: \"2fd1b280-fb87-44c5-ab0e-fff3fedfff7d\") " pod="openstack/nova-api-0" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.450286 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fd1b280-fb87-44c5-ab0e-fff3fedfff7d-public-tls-certs\") pod \"nova-api-0\" (UID: \"2fd1b280-fb87-44c5-ab0e-fff3fedfff7d\") " pod="openstack/nova-api-0" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.551762 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fd1b280-fb87-44c5-ab0e-fff3fedfff7d-public-tls-certs\") pod \"nova-api-0\" (UID: \"2fd1b280-fb87-44c5-ab0e-fff3fedfff7d\") " pod="openstack/nova-api-0" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.552104 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fd1b280-fb87-44c5-ab0e-fff3fedfff7d-logs\") pod \"nova-api-0\" (UID: \"2fd1b280-fb87-44c5-ab0e-fff3fedfff7d\") " pod="openstack/nova-api-0" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.552131 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fd1b280-fb87-44c5-ab0e-fff3fedfff7d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2fd1b280-fb87-44c5-ab0e-fff3fedfff7d\") " pod="openstack/nova-api-0" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.552150 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fd1b280-fb87-44c5-ab0e-fff3fedfff7d-config-data\") pod \"nova-api-0\" (UID: \"2fd1b280-fb87-44c5-ab0e-fff3fedfff7d\") " pod="openstack/nova-api-0" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.552169 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fd1b280-fb87-44c5-ab0e-fff3fedfff7d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2fd1b280-fb87-44c5-ab0e-fff3fedfff7d\") " pod="openstack/nova-api-0" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.552194 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wcnn\" (UniqueName: \"kubernetes.io/projected/2fd1b280-fb87-44c5-ab0e-fff3fedfff7d-kube-api-access-4wcnn\") pod \"nova-api-0\" (UID: \"2fd1b280-fb87-44c5-ab0e-fff3fedfff7d\") " pod="openstack/nova-api-0" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.552629 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fd1b280-fb87-44c5-ab0e-fff3fedfff7d-logs\") pod \"nova-api-0\" (UID: \"2fd1b280-fb87-44c5-ab0e-fff3fedfff7d\") " pod="openstack/nova-api-0" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.555591 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fd1b280-fb87-44c5-ab0e-fff3fedfff7d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2fd1b280-fb87-44c5-ab0e-fff3fedfff7d\") " pod="openstack/nova-api-0" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.555934 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fd1b280-fb87-44c5-ab0e-fff3fedfff7d-config-data\") pod \"nova-api-0\" (UID: \"2fd1b280-fb87-44c5-ab0e-fff3fedfff7d\") " pod="openstack/nova-api-0" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.556204 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fd1b280-fb87-44c5-ab0e-fff3fedfff7d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2fd1b280-fb87-44c5-ab0e-fff3fedfff7d\") " pod="openstack/nova-api-0" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.556514 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fd1b280-fb87-44c5-ab0e-fff3fedfff7d-public-tls-certs\") pod \"nova-api-0\" (UID: \"2fd1b280-fb87-44c5-ab0e-fff3fedfff7d\") " pod="openstack/nova-api-0" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.568859 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wcnn\" (UniqueName: \"kubernetes.io/projected/2fd1b280-fb87-44c5-ab0e-fff3fedfff7d-kube-api-access-4wcnn\") pod \"nova-api-0\" (UID: \"2fd1b280-fb87-44c5-ab0e-fff3fedfff7d\") " pod="openstack/nova-api-0" Feb 01 14:40:35 crc kubenswrapper[4820]: I0201 14:40:35.673397 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 01 14:40:36 crc kubenswrapper[4820]: I0201 14:40:36.284047 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 01 14:40:36 crc kubenswrapper[4820]: I0201 14:40:36.308290 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2fd1b280-fb87-44c5-ab0e-fff3fedfff7d","Type":"ContainerStarted","Data":"4982b584beba02acb60896df6cc12cfe12a0cd622476f05eab0aa7131e1058b1"} Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.210052 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a35e5429-427c-4f54-ae6d-60648ef90eed" path="/var/lib/kubelet/pods/a35e5429-427c-4f54-ae6d-60648ef90eed/volumes" Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.309932 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.325561 4820 generic.go:334] "Generic (PLEG): container finished" podID="4102147a-54c1-4f63-8f88-92e634a6db94" containerID="a6c6a76668d869fdf947866aeda672aa3e2334b0330360ddbe7c881662b21a37" exitCode=0 Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.325643 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.325657 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4102147a-54c1-4f63-8f88-92e634a6db94","Type":"ContainerDied","Data":"a6c6a76668d869fdf947866aeda672aa3e2334b0330360ddbe7c881662b21a37"} Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.325832 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4102147a-54c1-4f63-8f88-92e634a6db94","Type":"ContainerDied","Data":"8501834a38075b2aaefb385f30241c3e9583eeaa70397b535e1049846274a48c"} Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.325854 4820 scope.go:117] "RemoveContainer" containerID="a6c6a76668d869fdf947866aeda672aa3e2334b0330360ddbe7c881662b21a37" Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.332048 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2fd1b280-fb87-44c5-ab0e-fff3fedfff7d","Type":"ContainerStarted","Data":"29f878c6af0c19b9d4fab6062aa33e041fe8c393e83d2fceb5c701ac73a05c19"} Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.332135 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2fd1b280-fb87-44c5-ab0e-fff3fedfff7d","Type":"ContainerStarted","Data":"6277dec123afb6e7a41001ac8f9c67c75fa63bb4bc9ad1db4b50809943a120f2"} Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.371776 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.371751142 podStartE2EDuration="2.371751142s" podCreationTimestamp="2026-02-01 14:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:40:37.358818315 +0000 UTC m=+1178.879184609" watchObservedRunningTime="2026-02-01 14:40:37.371751142 +0000 UTC m=+1178.892117426" Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.378311 4820 scope.go:117] "RemoveContainer" containerID="a6c6a76668d869fdf947866aeda672aa3e2334b0330360ddbe7c881662b21a37" Feb 01 14:40:37 crc kubenswrapper[4820]: E0201 14:40:37.378691 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6c6a76668d869fdf947866aeda672aa3e2334b0330360ddbe7c881662b21a37\": container with ID starting with a6c6a76668d869fdf947866aeda672aa3e2334b0330360ddbe7c881662b21a37 not found: ID does not exist" containerID="a6c6a76668d869fdf947866aeda672aa3e2334b0330360ddbe7c881662b21a37" Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.378780 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6c6a76668d869fdf947866aeda672aa3e2334b0330360ddbe7c881662b21a37"} err="failed to get container status \"a6c6a76668d869fdf947866aeda672aa3e2334b0330360ddbe7c881662b21a37\": rpc error: code = NotFound desc = could not find container \"a6c6a76668d869fdf947866aeda672aa3e2334b0330360ddbe7c881662b21a37\": container with ID starting with a6c6a76668d869fdf947866aeda672aa3e2334b0330360ddbe7c881662b21a37 not found: ID does not exist" Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.388802 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrd55\" (UniqueName: \"kubernetes.io/projected/4102147a-54c1-4f63-8f88-92e634a6db94-kube-api-access-jrd55\") pod \"4102147a-54c1-4f63-8f88-92e634a6db94\" (UID: 
\"4102147a-54c1-4f63-8f88-92e634a6db94\") " Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.388970 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4102147a-54c1-4f63-8f88-92e634a6db94-config-data\") pod \"4102147a-54c1-4f63-8f88-92e634a6db94\" (UID: \"4102147a-54c1-4f63-8f88-92e634a6db94\") " Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.389038 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4102147a-54c1-4f63-8f88-92e634a6db94-combined-ca-bundle\") pod \"4102147a-54c1-4f63-8f88-92e634a6db94\" (UID: \"4102147a-54c1-4f63-8f88-92e634a6db94\") " Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.417813 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4102147a-54c1-4f63-8f88-92e634a6db94-kube-api-access-jrd55" (OuterVolumeSpecName: "kube-api-access-jrd55") pod "4102147a-54c1-4f63-8f88-92e634a6db94" (UID: "4102147a-54c1-4f63-8f88-92e634a6db94"). InnerVolumeSpecName "kube-api-access-jrd55". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.426160 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4102147a-54c1-4f63-8f88-92e634a6db94-config-data" (OuterVolumeSpecName: "config-data") pod "4102147a-54c1-4f63-8f88-92e634a6db94" (UID: "4102147a-54c1-4f63-8f88-92e634a6db94"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.427833 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4102147a-54c1-4f63-8f88-92e634a6db94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4102147a-54c1-4f63-8f88-92e634a6db94" (UID: "4102147a-54c1-4f63-8f88-92e634a6db94"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.491106 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrd55\" (UniqueName: \"kubernetes.io/projected/4102147a-54c1-4f63-8f88-92e634a6db94-kube-api-access-jrd55\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.491241 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4102147a-54c1-4f63-8f88-92e634a6db94-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.491413 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4102147a-54c1-4f63-8f88-92e634a6db94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.628803 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ae340692-583b-402a-8638-a9d9d9442c08" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": read tcp 10.217.0.2:33850->10.217.0.178:8775: read: connection reset by peer" Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.628852 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ae340692-583b-402a-8638-a9d9d9442c08" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": read tcp 10.217.0.2:33866->10.217.0.178:8775: read: connection reset by peer" Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.805982 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.828747 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.843306 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 01 14:40:37 crc kubenswrapper[4820]: E0201 14:40:37.843666 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4102147a-54c1-4f63-8f88-92e634a6db94" containerName="nova-scheduler-scheduler" Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.843682 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4102147a-54c1-4f63-8f88-92e634a6db94" containerName="nova-scheduler-scheduler" Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.843854 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4102147a-54c1-4f63-8f88-92e634a6db94" containerName="nova-scheduler-scheduler" Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.844451 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.848510 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 01 14:40:37 crc kubenswrapper[4820]: I0201 14:40:37.854530 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.000122 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f2dcef-35d9-4ef2-b4b8-afd55a7683c5-config-data\") pod \"nova-scheduler-0\" (UID: \"36f2dcef-35d9-4ef2-b4b8-afd55a7683c5\") " pod="openstack/nova-scheduler-0" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.000178 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wzqj\" (UniqueName: \"kubernetes.io/projected/36f2dcef-35d9-4ef2-b4b8-afd55a7683c5-kube-api-access-6wzqj\") pod \"nova-scheduler-0\" (UID: \"36f2dcef-35d9-4ef2-b4b8-afd55a7683c5\") " pod="openstack/nova-scheduler-0" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.000284 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f2dcef-35d9-4ef2-b4b8-afd55a7683c5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"36f2dcef-35d9-4ef2-b4b8-afd55a7683c5\") " pod="openstack/nova-scheduler-0" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.101852 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f2dcef-35d9-4ef2-b4b8-afd55a7683c5-config-data\") pod \"nova-scheduler-0\" (UID: \"36f2dcef-35d9-4ef2-b4b8-afd55a7683c5\") " pod="openstack/nova-scheduler-0" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.102322 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wzqj\" (UniqueName: \"kubernetes.io/projected/36f2dcef-35d9-4ef2-b4b8-afd55a7683c5-kube-api-access-6wzqj\") pod \"nova-scheduler-0\" (UID: \"36f2dcef-35d9-4ef2-b4b8-afd55a7683c5\") " pod="openstack/nova-scheduler-0" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.102598 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f2dcef-35d9-4ef2-b4b8-afd55a7683c5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"36f2dcef-35d9-4ef2-b4b8-afd55a7683c5\") " pod="openstack/nova-scheduler-0" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.108678 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f2dcef-35d9-4ef2-b4b8-afd55a7683c5-config-data\") pod \"nova-scheduler-0\" (UID: \"36f2dcef-35d9-4ef2-b4b8-afd55a7683c5\") " pod="openstack/nova-scheduler-0" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.109012 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f2dcef-35d9-4ef2-b4b8-afd55a7683c5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"36f2dcef-35d9-4ef2-b4b8-afd55a7683c5\") " pod="openstack/nova-scheduler-0" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.119637 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wzqj\" (UniqueName: 
\"kubernetes.io/projected/36f2dcef-35d9-4ef2-b4b8-afd55a7683c5-kube-api-access-6wzqj\") pod \"nova-scheduler-0\" (UID: \"36f2dcef-35d9-4ef2-b4b8-afd55a7683c5\") " pod="openstack/nova-scheduler-0" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.174508 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.189630 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.304614 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae340692-583b-402a-8638-a9d9d9442c08-nova-metadata-tls-certs\") pod \"ae340692-583b-402a-8638-a9d9d9442c08\" (UID: \"ae340692-583b-402a-8638-a9d9d9442c08\") " Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.304742 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae340692-583b-402a-8638-a9d9d9442c08-logs\") pod \"ae340692-583b-402a-8638-a9d9d9442c08\" (UID: \"ae340692-583b-402a-8638-a9d9d9442c08\") " Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.304803 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae340692-583b-402a-8638-a9d9d9442c08-config-data\") pod \"ae340692-583b-402a-8638-a9d9d9442c08\" (UID: \"ae340692-583b-402a-8638-a9d9d9442c08\") " Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.304909 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae340692-583b-402a-8638-a9d9d9442c08-combined-ca-bundle\") pod \"ae340692-583b-402a-8638-a9d9d9442c08\" (UID: \"ae340692-583b-402a-8638-a9d9d9442c08\") " Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.305001 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vgvr\" (UniqueName: \"kubernetes.io/projected/ae340692-583b-402a-8638-a9d9d9442c08-kube-api-access-2vgvr\") pod \"ae340692-583b-402a-8638-a9d9d9442c08\" (UID: \"ae340692-583b-402a-8638-a9d9d9442c08\") " Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.306046 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae340692-583b-402a-8638-a9d9d9442c08-logs" (OuterVolumeSpecName: "logs") pod "ae340692-583b-402a-8638-a9d9d9442c08" (UID: "ae340692-583b-402a-8638-a9d9d9442c08"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.311510 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae340692-583b-402a-8638-a9d9d9442c08-kube-api-access-2vgvr" (OuterVolumeSpecName: "kube-api-access-2vgvr") pod "ae340692-583b-402a-8638-a9d9d9442c08" (UID: "ae340692-583b-402a-8638-a9d9d9442c08"). InnerVolumeSpecName "kube-api-access-2vgvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.336196 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae340692-583b-402a-8638-a9d9d9442c08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae340692-583b-402a-8638-a9d9d9442c08" (UID: "ae340692-583b-402a-8638-a9d9d9442c08"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.336219 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae340692-583b-402a-8638-a9d9d9442c08-config-data" (OuterVolumeSpecName: "config-data") pod "ae340692-583b-402a-8638-a9d9d9442c08" (UID: "ae340692-583b-402a-8638-a9d9d9442c08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.349458 4820 generic.go:334] "Generic (PLEG): container finished" podID="ae340692-583b-402a-8638-a9d9d9442c08" containerID="be62e9dfc6b08cddab07e43fd5275fc416cebb1ce0a01922d8c35357dc903a3b" exitCode=0 Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.349552 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ae340692-583b-402a-8638-a9d9d9442c08","Type":"ContainerDied","Data":"be62e9dfc6b08cddab07e43fd5275fc416cebb1ce0a01922d8c35357dc903a3b"} Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.349587 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ae340692-583b-402a-8638-a9d9d9442c08","Type":"ContainerDied","Data":"dc6209d89916ae974223d9762d565dbddc04196ccbffa4293552161305b81a1f"} Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.349607 4820 scope.go:117] "RemoveContainer" containerID="be62e9dfc6b08cddab07e43fd5275fc416cebb1ce0a01922d8c35357dc903a3b" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.349734 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.377096 4820 scope.go:117] "RemoveContainer" containerID="f1191aa60657e9f0c8ee307db774f0f0363c0c81819461406081c737a591c58c" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.402125 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae340692-583b-402a-8638-a9d9d9442c08-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ae340692-583b-402a-8638-a9d9d9442c08" (UID: "ae340692-583b-402a-8638-a9d9d9442c08"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.406141 4820 scope.go:117] "RemoveContainer" containerID="be62e9dfc6b08cddab07e43fd5275fc416cebb1ce0a01922d8c35357dc903a3b" Feb 01 14:40:38 crc kubenswrapper[4820]: E0201 14:40:38.408543 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be62e9dfc6b08cddab07e43fd5275fc416cebb1ce0a01922d8c35357dc903a3b\": container with ID starting with be62e9dfc6b08cddab07e43fd5275fc416cebb1ce0a01922d8c35357dc903a3b not found: ID does not exist" containerID="be62e9dfc6b08cddab07e43fd5275fc416cebb1ce0a01922d8c35357dc903a3b" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.408593 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be62e9dfc6b08cddab07e43fd5275fc416cebb1ce0a01922d8c35357dc903a3b"} err="failed to get container status \"be62e9dfc6b08cddab07e43fd5275fc416cebb1ce0a01922d8c35357dc903a3b\": rpc error: code = NotFound desc = could not find container \"be62e9dfc6b08cddab07e43fd5275fc416cebb1ce0a01922d8c35357dc903a3b\": container with ID starting with be62e9dfc6b08cddab07e43fd5275fc416cebb1ce0a01922d8c35357dc903a3b not found: ID does not exist" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.408620 4820 scope.go:117] "RemoveContainer" containerID="f1191aa60657e9f0c8ee307db774f0f0363c0c81819461406081c737a591c58c" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.408983 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vgvr\" (UniqueName: \"kubernetes.io/projected/ae340692-583b-402a-8638-a9d9d9442c08-kube-api-access-2vgvr\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.409005 4820 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae340692-583b-402a-8638-a9d9d9442c08-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.409017 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae340692-583b-402a-8638-a9d9d9442c08-logs\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.409030 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae340692-583b-402a-8638-a9d9d9442c08-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.409041 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae340692-583b-402a-8638-a9d9d9442c08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:38 crc kubenswrapper[4820]: E0201 14:40:38.409228 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1191aa60657e9f0c8ee307db774f0f0363c0c81819461406081c737a591c58c\": container with ID starting with f1191aa60657e9f0c8ee307db774f0f0363c0c81819461406081c737a591c58c not found: ID does not exist" containerID="f1191aa60657e9f0c8ee307db774f0f0363c0c81819461406081c737a591c58c" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.409280 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1191aa60657e9f0c8ee307db774f0f0363c0c81819461406081c737a591c58c"} err="failed to get container status 
\"f1191aa60657e9f0c8ee307db774f0f0363c0c81819461406081c737a591c58c\": rpc error: code = NotFound desc = could not find container \"f1191aa60657e9f0c8ee307db774f0f0363c0c81819461406081c737a591c58c\": container with ID starting with f1191aa60657e9f0c8ee307db774f0f0363c0c81819461406081c737a591c58c not found: ID does not exist" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.662919 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 01 14:40:38 crc kubenswrapper[4820]: W0201 14:40:38.665528 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36f2dcef_35d9_4ef2_b4b8_afd55a7683c5.slice/crio-29db3b92b1c630f07e5e9c19d523643fb1d4cd38851b6e17021f8a761825efb3 WatchSource:0}: Error finding container 29db3b92b1c630f07e5e9c19d523643fb1d4cd38851b6e17021f8a761825efb3: Status 404 returned error can't find the container with id 29db3b92b1c630f07e5e9c19d523643fb1d4cd38851b6e17021f8a761825efb3 Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.802274 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.819905 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.836030 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 01 14:40:38 crc kubenswrapper[4820]: E0201 14:40:38.836544 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae340692-583b-402a-8638-a9d9d9442c08" containerName="nova-metadata-metadata" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.836568 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae340692-583b-402a-8638-a9d9d9442c08" containerName="nova-metadata-metadata" Feb 01 14:40:38 crc kubenswrapper[4820]: E0201 14:40:38.836596 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae340692-583b-402a-8638-a9d9d9442c08" containerName="nova-metadata-log" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.836606 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae340692-583b-402a-8638-a9d9d9442c08" containerName="nova-metadata-log" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.836862 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae340692-583b-402a-8638-a9d9d9442c08" containerName="nova-metadata-log" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.836913 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae340692-583b-402a-8638-a9d9d9442c08" containerName="nova-metadata-metadata" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.838197 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.841282 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.842965 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.877378 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.918418 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d962ec-13d6-4839-be99-72ecf5dd3980-config-data\") pod \"nova-metadata-0\" (UID: \"92d962ec-13d6-4839-be99-72ecf5dd3980\") " pod="openstack/nova-metadata-0" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.918813 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92d962ec-13d6-4839-be99-72ecf5dd3980-logs\") pod \"nova-metadata-0\" (UID: \"92d962ec-13d6-4839-be99-72ecf5dd3980\") " pod="openstack/nova-metadata-0" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.918852 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d962ec-13d6-4839-be99-72ecf5dd3980-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"92d962ec-13d6-4839-be99-72ecf5dd3980\") " pod="openstack/nova-metadata-0" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.918895 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/92d962ec-13d6-4839-be99-72ecf5dd3980-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"92d962ec-13d6-4839-be99-72ecf5dd3980\") " pod="openstack/nova-metadata-0" Feb 01 14:40:38 crc kubenswrapper[4820]: I0201 14:40:38.918957 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klq6d\" (UniqueName: \"kubernetes.io/projected/92d962ec-13d6-4839-be99-72ecf5dd3980-kube-api-access-klq6d\") pod \"nova-metadata-0\" (UID: \"92d962ec-13d6-4839-be99-72ecf5dd3980\") " pod="openstack/nova-metadata-0" Feb 01 14:40:39 crc kubenswrapper[4820]: I0201 14:40:39.020522 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92d962ec-13d6-4839-be99-72ecf5dd3980-logs\") pod \"nova-metadata-0\" (UID: \"92d962ec-13d6-4839-be99-72ecf5dd3980\") " pod="openstack/nova-metadata-0" Feb 01 14:40:39 crc kubenswrapper[4820]: I0201 14:40:39.020582 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d962ec-13d6-4839-be99-72ecf5dd3980-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"92d962ec-13d6-4839-be99-72ecf5dd3980\") " pod="openstack/nova-metadata-0" Feb 01 14:40:39 crc kubenswrapper[4820]: I0201 14:40:39.020605 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/92d962ec-13d6-4839-be99-72ecf5dd3980-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"92d962ec-13d6-4839-be99-72ecf5dd3980\") " 
pod="openstack/nova-metadata-0" Feb 01 14:40:39 crc kubenswrapper[4820]: I0201 14:40:39.020670 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klq6d\" (UniqueName: \"kubernetes.io/projected/92d962ec-13d6-4839-be99-72ecf5dd3980-kube-api-access-klq6d\") pod \"nova-metadata-0\" (UID: \"92d962ec-13d6-4839-be99-72ecf5dd3980\") " pod="openstack/nova-metadata-0" Feb 01 14:40:39 crc kubenswrapper[4820]: I0201 14:40:39.020786 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d962ec-13d6-4839-be99-72ecf5dd3980-config-data\") pod \"nova-metadata-0\" (UID: \"92d962ec-13d6-4839-be99-72ecf5dd3980\") " pod="openstack/nova-metadata-0" Feb 01 14:40:39 crc kubenswrapper[4820]: I0201 14:40:39.020952 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92d962ec-13d6-4839-be99-72ecf5dd3980-logs\") pod \"nova-metadata-0\" (UID: \"92d962ec-13d6-4839-be99-72ecf5dd3980\") " pod="openstack/nova-metadata-0" Feb 01 14:40:39 crc kubenswrapper[4820]: I0201 14:40:39.025676 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d962ec-13d6-4839-be99-72ecf5dd3980-config-data\") pod \"nova-metadata-0\" (UID: \"92d962ec-13d6-4839-be99-72ecf5dd3980\") " pod="openstack/nova-metadata-0" Feb 01 14:40:39 crc kubenswrapper[4820]: I0201 14:40:39.026044 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/92d962ec-13d6-4839-be99-72ecf5dd3980-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"92d962ec-13d6-4839-be99-72ecf5dd3980\") " pod="openstack/nova-metadata-0" Feb 01 14:40:39 crc kubenswrapper[4820]: I0201 14:40:39.026215 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d962ec-13d6-4839-be99-72ecf5dd3980-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"92d962ec-13d6-4839-be99-72ecf5dd3980\") " pod="openstack/nova-metadata-0" Feb 01 14:40:39 crc kubenswrapper[4820]: I0201 14:40:39.035767 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klq6d\" (UniqueName: \"kubernetes.io/projected/92d962ec-13d6-4839-be99-72ecf5dd3980-kube-api-access-klq6d\") pod \"nova-metadata-0\" (UID: \"92d962ec-13d6-4839-be99-72ecf5dd3980\") " pod="openstack/nova-metadata-0" Feb 01 14:40:39 crc kubenswrapper[4820]: I0201 14:40:39.178164 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 01 14:40:39 crc kubenswrapper[4820]: I0201 14:40:39.209379 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4102147a-54c1-4f63-8f88-92e634a6db94" path="/var/lib/kubelet/pods/4102147a-54c1-4f63-8f88-92e634a6db94/volumes" Feb 01 14:40:39 crc kubenswrapper[4820]: I0201 14:40:39.210440 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae340692-583b-402a-8638-a9d9d9442c08" path="/var/lib/kubelet/pods/ae340692-583b-402a-8638-a9d9d9442c08/volumes" Feb 01 14:40:39 crc kubenswrapper[4820]: I0201 14:40:39.362046 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"36f2dcef-35d9-4ef2-b4b8-afd55a7683c5","Type":"ContainerStarted","Data":"05ba8e1dff37d4ba5bbbc5c1c9c454893b2aaa1c62be312d94ddf14d9b5a58aa"} Feb 01 14:40:39 crc kubenswrapper[4820]: I0201 14:40:39.362137 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"36f2dcef-35d9-4ef2-b4b8-afd55a7683c5","Type":"ContainerStarted","Data":"29db3b92b1c630f07e5e9c19d523643fb1d4cd38851b6e17021f8a761825efb3"} Feb 01 14:40:39 crc kubenswrapper[4820]: I0201 14:40:39.385344 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.385309425 podStartE2EDuration="2.385309425s" podCreationTimestamp="2026-02-01 14:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:40:39.379410331 +0000 UTC m=+1180.899776615" watchObservedRunningTime="2026-02-01 14:40:39.385309425 +0000 UTC m=+1180.905675699" Feb 01 14:40:39 crc kubenswrapper[4820]: I0201 14:40:39.620165 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 01 14:40:39 crc kubenswrapper[4820]: W0201 14:40:39.623010 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92d962ec_13d6_4839_be99_72ecf5dd3980.slice/crio-32d121692642d5596fd030c3a7c0fe3104631b20a187519b5eb9c737fbd8147c WatchSource:0}: Error finding container 32d121692642d5596fd030c3a7c0fe3104631b20a187519b5eb9c737fbd8147c: Status 404 returned error can't find the container with id 32d121692642d5596fd030c3a7c0fe3104631b20a187519b5eb9c737fbd8147c Feb 01 14:40:40 crc kubenswrapper[4820]: I0201 14:40:40.374477 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92d962ec-13d6-4839-be99-72ecf5dd3980","Type":"ContainerStarted","Data":"8623f222e8baa9f77598934a875a9fdad3ecf1618433b613649f3a1080984363"} Feb 01 14:40:40 crc kubenswrapper[4820]: I0201 14:40:40.374839 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92d962ec-13d6-4839-be99-72ecf5dd3980","Type":"ContainerStarted","Data":"44a8e2be8760f3e55d87cc2388dac4d288b773b4cf4287707f29f1eaf741fa73"} Feb 01 14:40:40 crc kubenswrapper[4820]: I0201 14:40:40.374851 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92d962ec-13d6-4839-be99-72ecf5dd3980","Type":"ContainerStarted","Data":"32d121692642d5596fd030c3a7c0fe3104631b20a187519b5eb9c737fbd8147c"} Feb 01 14:40:40 crc kubenswrapper[4820]: I0201 14:40:40.401829 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.401805429 podStartE2EDuration="2.401805429s" 
podCreationTimestamp="2026-02-01 14:40:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:40:40.393378563 +0000 UTC m=+1181.913744847" watchObservedRunningTime="2026-02-01 14:40:40.401805429 +0000 UTC m=+1181.922171713" Feb 01 14:40:43 crc kubenswrapper[4820]: I0201 14:40:43.175162 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 01 14:40:44 crc kubenswrapper[4820]: I0201 14:40:44.179002 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 01 14:40:44 crc kubenswrapper[4820]: I0201 14:40:44.179051 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 01 14:40:45 crc kubenswrapper[4820]: I0201 14:40:45.674042 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 01 14:40:45 crc kubenswrapper[4820]: I0201 14:40:45.674281 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 01 14:40:46 crc kubenswrapper[4820]: I0201 14:40:46.692201 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2fd1b280-fb87-44c5-ab0e-fff3fedfff7d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.186:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 01 14:40:46 crc kubenswrapper[4820]: I0201 14:40:46.693253 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2fd1b280-fb87-44c5-ab0e-fff3fedfff7d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.186:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 01 14:40:48 crc kubenswrapper[4820]: I0201 14:40:48.175193 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 01 14:40:48 crc kubenswrapper[4820]: I0201 14:40:48.203765 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 01 14:40:48 crc kubenswrapper[4820]: I0201 14:40:48.479373 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 01 14:40:49 crc kubenswrapper[4820]: I0201 14:40:49.180482 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 01 14:40:49 crc kubenswrapper[4820]: I0201 14:40:49.180538 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 01 14:40:50 crc kubenswrapper[4820]: I0201 14:40:50.189018 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="92d962ec-13d6-4839-be99-72ecf5dd3980" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 01 14:40:50 crc kubenswrapper[4820]: I0201 14:40:50.189018 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="92d962ec-13d6-4839-be99-72ecf5dd3980" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 01 14:40:55 crc kubenswrapper[4820]: I0201 14:40:55.602746 4820 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 01 14:40:55 crc kubenswrapper[4820]: I0201 14:40:55.691567 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 01 14:40:55 crc kubenswrapper[4820]: I0201 14:40:55.692369 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 01 14:40:55 crc kubenswrapper[4820]: I0201 14:40:55.701863 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 01 14:40:55 crc kubenswrapper[4820]: I0201 14:40:55.706519 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 01 14:40:56 crc kubenswrapper[4820]: I0201 14:40:56.526557 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 01 14:40:56 crc kubenswrapper[4820]: I0201 14:40:56.539030 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 01 14:40:58 crc kubenswrapper[4820]: I0201 14:40:58.271685 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 01 14:40:58 crc kubenswrapper[4820]: I0201 14:40:58.272585 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="834dff46-eb6f-4646-830a-a665bcd1461b" containerName="kube-state-metrics" containerID="cri-o://ba5dc266666ca090cdd8d6040e3399624af7fc09e03a870c966dd4c4e316dd5e" gracePeriod=30 Feb 01 14:40:58 crc kubenswrapper[4820]: I0201 14:40:58.545968 4820 generic.go:334] "Generic (PLEG): container finished" podID="834dff46-eb6f-4646-830a-a665bcd1461b" containerID="ba5dc266666ca090cdd8d6040e3399624af7fc09e03a870c966dd4c4e316dd5e" exitCode=2 Feb 01 14:40:58 crc kubenswrapper[4820]: I0201 14:40:58.546057 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"834dff46-eb6f-4646-830a-a665bcd1461b","Type":"ContainerDied","Data":"ba5dc266666ca090cdd8d6040e3399624af7fc09e03a870c966dd4c4e316dd5e"} Feb 01 14:40:58 crc kubenswrapper[4820]: I0201 14:40:58.714387 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 01 14:40:58 crc kubenswrapper[4820]: I0201 14:40:58.727564 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srtkx\" (UniqueName: \"kubernetes.io/projected/834dff46-eb6f-4646-830a-a665bcd1461b-kube-api-access-srtkx\") pod \"834dff46-eb6f-4646-830a-a665bcd1461b\" (UID: \"834dff46-eb6f-4646-830a-a665bcd1461b\") " Feb 01 14:40:58 crc kubenswrapper[4820]: I0201 14:40:58.732594 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/834dff46-eb6f-4646-830a-a665bcd1461b-kube-api-access-srtkx" (OuterVolumeSpecName: "kube-api-access-srtkx") pod "834dff46-eb6f-4646-830a-a665bcd1461b" (UID: "834dff46-eb6f-4646-830a-a665bcd1461b"). InnerVolumeSpecName "kube-api-access-srtkx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:40:58 crc kubenswrapper[4820]: I0201 14:40:58.830297 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srtkx\" (UniqueName: \"kubernetes.io/projected/834dff46-eb6f-4646-830a-a665bcd1461b-kube-api-access-srtkx\") on node \"crc\" DevicePath \"\"" Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.209175 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.209450 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.214329 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.224114 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.260049 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.260399 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="845e48b0-71b1-4f3b-82d2-db1f00c69601" containerName="ceilometer-central-agent" containerID="cri-o://cb169db8a65e1c2d5be7a7a22a3fa4c091eaecfa985f45b5fefcf1d095f29214" gracePeriod=30 Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.260446 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="845e48b0-71b1-4f3b-82d2-db1f00c69601" containerName="proxy-httpd" containerID="cri-o://7577ed8540d975f423976305bb3705f4eafd461274edd3901d23c43bda9c9e8d" gracePeriod=30 Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.260470 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="845e48b0-71b1-4f3b-82d2-db1f00c69601" containerName="sg-core" containerID="cri-o://eb0021f9063cb73a53bb654661860c3a3b3327a7cda81647f212d0971fdf38df" gracePeriod=30 Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.260525 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="845e48b0-71b1-4f3b-82d2-db1f00c69601" containerName="ceilometer-notification-agent" containerID="cri-o://797ad3675213eb9431dd3ac3e9c01b54dfaa2c9921a88da45c913b78f6da7336" gracePeriod=30 Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.557590 4820 generic.go:334] "Generic (PLEG): container finished" podID="845e48b0-71b1-4f3b-82d2-db1f00c69601" containerID="7577ed8540d975f423976305bb3705f4eafd461274edd3901d23c43bda9c9e8d" exitCode=0 Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.557626 4820 generic.go:334] "Generic (PLEG): container finished" podID="845e48b0-71b1-4f3b-82d2-db1f00c69601" containerID="eb0021f9063cb73a53bb654661860c3a3b3327a7cda81647f212d0971fdf38df" exitCode=2 Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.557651 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"845e48b0-71b1-4f3b-82d2-db1f00c69601","Type":"ContainerDied","Data":"7577ed8540d975f423976305bb3705f4eafd461274edd3901d23c43bda9c9e8d"} Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.557680 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"845e48b0-71b1-4f3b-82d2-db1f00c69601","Type":"ContainerDied","Data":"eb0021f9063cb73a53bb654661860c3a3b3327a7cda81647f212d0971fdf38df"} Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.561113 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.561105 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"834dff46-eb6f-4646-830a-a665bcd1461b","Type":"ContainerDied","Data":"ecb263a3f5b10c23831e4ee7fd54def8a4d99d2a95fc6de0d0c90daa08713bec"} Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.561163 4820 scope.go:117] "RemoveContainer" containerID="ba5dc266666ca090cdd8d6040e3399624af7fc09e03a870c966dd4c4e316dd5e" Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.590117 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.598475 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.616586 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 01 14:40:59 crc kubenswrapper[4820]: E0201 14:40:59.616994 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="834dff46-eb6f-4646-830a-a665bcd1461b" containerName="kube-state-metrics" Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.617018 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="834dff46-eb6f-4646-830a-a665bcd1461b" containerName="kube-state-metrics" Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.617190 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="834dff46-eb6f-4646-830a-a665bcd1461b" containerName="kube-state-metrics" Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.617758 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.620161 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.620660 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.625755 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.643803 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f94e4784-db7a-4f03-ac77-8a50d0b3479d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f94e4784-db7a-4f03-ac77-8a50d0b3479d\") " pod="openstack/kube-state-metrics-0" Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.643836 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw485\" (UniqueName: \"kubernetes.io/projected/f94e4784-db7a-4f03-ac77-8a50d0b3479d-kube-api-access-jw485\") pod \"kube-state-metrics-0\" (UID: \"f94e4784-db7a-4f03-ac77-8a50d0b3479d\") " pod="openstack/kube-state-metrics-0" Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.643935 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f94e4784-db7a-4f03-ac77-8a50d0b3479d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f94e4784-db7a-4f03-ac77-8a50d0b3479d\") " pod="openstack/kube-state-metrics-0" Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.644002 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f94e4784-db7a-4f03-ac77-8a50d0b3479d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f94e4784-db7a-4f03-ac77-8a50d0b3479d\") " pod="openstack/kube-state-metrics-0" Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.745963 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f94e4784-db7a-4f03-ac77-8a50d0b3479d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f94e4784-db7a-4f03-ac77-8a50d0b3479d\") " pod="openstack/kube-state-metrics-0" Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.746099 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f94e4784-db7a-4f03-ac77-8a50d0b3479d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f94e4784-db7a-4f03-ac77-8a50d0b3479d\") " pod="openstack/kube-state-metrics-0" Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.746219 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f94e4784-db7a-4f03-ac77-8a50d0b3479d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f94e4784-db7a-4f03-ac77-8a50d0b3479d\") " pod="openstack/kube-state-metrics-0" Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.746244 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw485\" 
(UniqueName: \"kubernetes.io/projected/f94e4784-db7a-4f03-ac77-8a50d0b3479d-kube-api-access-jw485\") pod \"kube-state-metrics-0\" (UID: \"f94e4784-db7a-4f03-ac77-8a50d0b3479d\") " pod="openstack/kube-state-metrics-0" Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.750397 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f94e4784-db7a-4f03-ac77-8a50d0b3479d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f94e4784-db7a-4f03-ac77-8a50d0b3479d\") " pod="openstack/kube-state-metrics-0" Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.751016 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f94e4784-db7a-4f03-ac77-8a50d0b3479d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f94e4784-db7a-4f03-ac77-8a50d0b3479d\") " pod="openstack/kube-state-metrics-0" Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.751342 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f94e4784-db7a-4f03-ac77-8a50d0b3479d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f94e4784-db7a-4f03-ac77-8a50d0b3479d\") " pod="openstack/kube-state-metrics-0" Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.763136 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw485\" (UniqueName: \"kubernetes.io/projected/f94e4784-db7a-4f03-ac77-8a50d0b3479d-kube-api-access-jw485\") pod \"kube-state-metrics-0\" (UID: \"f94e4784-db7a-4f03-ac77-8a50d0b3479d\") " pod="openstack/kube-state-metrics-0" Feb 01 14:40:59 crc kubenswrapper[4820]: I0201 14:40:59.944351 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 01 14:41:00 crc kubenswrapper[4820]: I0201 14:41:00.432904 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 01 14:41:00 crc kubenswrapper[4820]: W0201 14:41:00.436836 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf94e4784_db7a_4f03_ac77_8a50d0b3479d.slice/crio-cda4d280a60631892b867d432ca95a8bc8813a88d0d506ad9432f5139c94357f WatchSource:0}: Error finding container cda4d280a60631892b867d432ca95a8bc8813a88d0d506ad9432f5139c94357f: Status 404 returned error can't find the container with id cda4d280a60631892b867d432ca95a8bc8813a88d0d506ad9432f5139c94357f Feb 01 14:41:00 crc kubenswrapper[4820]: I0201 14:41:00.576381 4820 generic.go:334] "Generic (PLEG): container finished" podID="845e48b0-71b1-4f3b-82d2-db1f00c69601" containerID="cb169db8a65e1c2d5be7a7a22a3fa4c091eaecfa985f45b5fefcf1d095f29214" exitCode=0 Feb 01 14:41:00 crc kubenswrapper[4820]: I0201 14:41:00.576449 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"845e48b0-71b1-4f3b-82d2-db1f00c69601","Type":"ContainerDied","Data":"cb169db8a65e1c2d5be7a7a22a3fa4c091eaecfa985f45b5fefcf1d095f29214"} Feb 01 14:41:00 crc kubenswrapper[4820]: I0201 14:41:00.578034 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f94e4784-db7a-4f03-ac77-8a50d0b3479d","Type":"ContainerStarted","Data":"cda4d280a60631892b867d432ca95a8bc8813a88d0d506ad9432f5139c94357f"} Feb 01 14:41:01 crc kubenswrapper[4820]: I0201 14:41:01.208814 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="834dff46-eb6f-4646-830a-a665bcd1461b" path="/var/lib/kubelet/pods/834dff46-eb6f-4646-830a-a665bcd1461b/volumes" Feb 01 14:41:01 crc kubenswrapper[4820]: I0201 14:41:01.586858 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f94e4784-db7a-4f03-ac77-8a50d0b3479d","Type":"ContainerStarted","Data":"991163c22f69ae4e33853bba18f314381a87b1bc9f31ee76a27e0a6525533dbc"} Feb 01 14:41:01 crc kubenswrapper[4820]: I0201 14:41:01.587026 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 01 14:41:01 crc kubenswrapper[4820]: I0201 14:41:01.614513 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.258922526 podStartE2EDuration="2.614387591s" podCreationTimestamp="2026-02-01 14:40:59 +0000 UTC" firstStartedPulling="2026-02-01 14:41:00.438590871 +0000 UTC m=+1201.958957145" lastFinishedPulling="2026-02-01 14:41:00.794055926 +0000 UTC m=+1202.314422210" observedRunningTime="2026-02-01 14:41:01.605826202 +0000 UTC m=+1203.126192496" watchObservedRunningTime="2026-02-01 14:41:01.614387591 +0000 UTC m=+1203.134753875" Feb 01 14:41:03 crc kubenswrapper[4820]: I0201 14:41:03.605996 4820 generic.go:334] "Generic (PLEG): container finished" podID="845e48b0-71b1-4f3b-82d2-db1f00c69601" containerID="797ad3675213eb9431dd3ac3e9c01b54dfaa2c9921a88da45c913b78f6da7336" exitCode=0 Feb 01 14:41:03 crc kubenswrapper[4820]: I0201 14:41:03.606080 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"845e48b0-71b1-4f3b-82d2-db1f00c69601","Type":"ContainerDied","Data":"797ad3675213eb9431dd3ac3e9c01b54dfaa2c9921a88da45c913b78f6da7336"} Feb 01 14:41:03 crc kubenswrapper[4820]: 
I0201 14:41:03.739728 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 01 14:41:03 crc kubenswrapper[4820]: I0201 14:41:03.835392 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/845e48b0-71b1-4f3b-82d2-db1f00c69601-sg-core-conf-yaml\") pod \"845e48b0-71b1-4f3b-82d2-db1f00c69601\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " Feb 01 14:41:03 crc kubenswrapper[4820]: I0201 14:41:03.835711 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/845e48b0-71b1-4f3b-82d2-db1f00c69601-run-httpd\") pod \"845e48b0-71b1-4f3b-82d2-db1f00c69601\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " Feb 01 14:41:03 crc kubenswrapper[4820]: I0201 14:41:03.835787 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/845e48b0-71b1-4f3b-82d2-db1f00c69601-log-httpd\") pod \"845e48b0-71b1-4f3b-82d2-db1f00c69601\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " Feb 01 14:41:03 crc kubenswrapper[4820]: I0201 14:41:03.835936 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7qh6\" (UniqueName: \"kubernetes.io/projected/845e48b0-71b1-4f3b-82d2-db1f00c69601-kube-api-access-c7qh6\") pod \"845e48b0-71b1-4f3b-82d2-db1f00c69601\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " Feb 01 14:41:03 crc kubenswrapper[4820]: I0201 14:41:03.836122 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/845e48b0-71b1-4f3b-82d2-db1f00c69601-combined-ca-bundle\") pod \"845e48b0-71b1-4f3b-82d2-db1f00c69601\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " Feb 01 14:41:03 crc kubenswrapper[4820]: I0201 14:41:03.836220 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/845e48b0-71b1-4f3b-82d2-db1f00c69601-config-data\") pod \"845e48b0-71b1-4f3b-82d2-db1f00c69601\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " Feb 01 14:41:03 crc kubenswrapper[4820]: I0201 14:41:03.836291 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/845e48b0-71b1-4f3b-82d2-db1f00c69601-scripts\") pod \"845e48b0-71b1-4f3b-82d2-db1f00c69601\" (UID: \"845e48b0-71b1-4f3b-82d2-db1f00c69601\") " Feb 01 14:41:03 crc kubenswrapper[4820]: I0201 14:41:03.841333 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/845e48b0-71b1-4f3b-82d2-db1f00c69601-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "845e48b0-71b1-4f3b-82d2-db1f00c69601" (UID: "845e48b0-71b1-4f3b-82d2-db1f00c69601"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:41:03 crc kubenswrapper[4820]: I0201 14:41:03.850031 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/845e48b0-71b1-4f3b-82d2-db1f00c69601-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "845e48b0-71b1-4f3b-82d2-db1f00c69601" (UID: "845e48b0-71b1-4f3b-82d2-db1f00c69601"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:41:03 crc kubenswrapper[4820]: I0201 14:41:03.855677 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/845e48b0-71b1-4f3b-82d2-db1f00c69601-scripts" (OuterVolumeSpecName: "scripts") pod "845e48b0-71b1-4f3b-82d2-db1f00c69601" (UID: "845e48b0-71b1-4f3b-82d2-db1f00c69601"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:41:03 crc kubenswrapper[4820]: I0201 14:41:03.874242 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/845e48b0-71b1-4f3b-82d2-db1f00c69601-kube-api-access-c7qh6" (OuterVolumeSpecName: "kube-api-access-c7qh6") pod "845e48b0-71b1-4f3b-82d2-db1f00c69601" (UID: "845e48b0-71b1-4f3b-82d2-db1f00c69601"). InnerVolumeSpecName "kube-api-access-c7qh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:41:03 crc kubenswrapper[4820]: I0201 14:41:03.915292 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/845e48b0-71b1-4f3b-82d2-db1f00c69601-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "845e48b0-71b1-4f3b-82d2-db1f00c69601" (UID: "845e48b0-71b1-4f3b-82d2-db1f00c69601"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:41:03 crc kubenswrapper[4820]: I0201 14:41:03.938593 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/845e48b0-71b1-4f3b-82d2-db1f00c69601-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 14:41:03 crc kubenswrapper[4820]: I0201 14:41:03.938638 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/845e48b0-71b1-4f3b-82d2-db1f00c69601-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 01 14:41:03 crc kubenswrapper[4820]: I0201 14:41:03.938648 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/845e48b0-71b1-4f3b-82d2-db1f00c69601-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 01 14:41:03 crc kubenswrapper[4820]: I0201 14:41:03.938656 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/845e48b0-71b1-4f3b-82d2-db1f00c69601-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 01 14:41:03 crc kubenswrapper[4820]: I0201 14:41:03.938665 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7qh6\" (UniqueName: \"kubernetes.io/projected/845e48b0-71b1-4f3b-82d2-db1f00c69601-kube-api-access-c7qh6\") on node \"crc\" DevicePath \"\"" Feb 01 14:41:03 crc kubenswrapper[4820]: I0201 14:41:03.953529 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/845e48b0-71b1-4f3b-82d2-db1f00c69601-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "845e48b0-71b1-4f3b-82d2-db1f00c69601" (UID: "845e48b0-71b1-4f3b-82d2-db1f00c69601"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:41:03 crc kubenswrapper[4820]: I0201 14:41:03.973035 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/845e48b0-71b1-4f3b-82d2-db1f00c69601-config-data" (OuterVolumeSpecName: "config-data") pod "845e48b0-71b1-4f3b-82d2-db1f00c69601" (UID: "845e48b0-71b1-4f3b-82d2-db1f00c69601"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.039951 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/845e48b0-71b1-4f3b-82d2-db1f00c69601-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.039987 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/845e48b0-71b1-4f3b-82d2-db1f00c69601-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.621379 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"845e48b0-71b1-4f3b-82d2-db1f00c69601","Type":"ContainerDied","Data":"f830af1f15eddc29f21ddcdbc6efaedde9e5379b89a3e7c475d2c478c910d729"} Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.621434 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.621441 4820 scope.go:117] "RemoveContainer" containerID="7577ed8540d975f423976305bb3705f4eafd461274edd3901d23c43bda9c9e8d" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.651238 4820 scope.go:117] "RemoveContainer" containerID="eb0021f9063cb73a53bb654661860c3a3b3327a7cda81647f212d0971fdf38df" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.661033 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.671599 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.674721 4820 scope.go:117] "RemoveContainer" containerID="797ad3675213eb9431dd3ac3e9c01b54dfaa2c9921a88da45c913b78f6da7336" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.682099 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:41:04 crc kubenswrapper[4820]: E0201 14:41:04.682437 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="845e48b0-71b1-4f3b-82d2-db1f00c69601" containerName="ceilometer-notification-agent" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.682449 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="845e48b0-71b1-4f3b-82d2-db1f00c69601" containerName="ceilometer-notification-agent" Feb 01 14:41:04 crc kubenswrapper[4820]: E0201 14:41:04.682462 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="845e48b0-71b1-4f3b-82d2-db1f00c69601" containerName="proxy-httpd" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.682469 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="845e48b0-71b1-4f3b-82d2-db1f00c69601" containerName="proxy-httpd" Feb 01 14:41:04 crc kubenswrapper[4820]: E0201 14:41:04.682487 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="845e48b0-71b1-4f3b-82d2-db1f00c69601" containerName="ceilometer-central-agent" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.682493 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="845e48b0-71b1-4f3b-82d2-db1f00c69601" containerName="ceilometer-central-agent" Feb 01 14:41:04 crc kubenswrapper[4820]: E0201 14:41:04.682505 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="845e48b0-71b1-4f3b-82d2-db1f00c69601" containerName="sg-core" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.682510 4820 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="845e48b0-71b1-4f3b-82d2-db1f00c69601" containerName="sg-core" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.682689 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="845e48b0-71b1-4f3b-82d2-db1f00c69601" containerName="ceilometer-notification-agent" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.682718 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="845e48b0-71b1-4f3b-82d2-db1f00c69601" containerName="sg-core" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.682728 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="845e48b0-71b1-4f3b-82d2-db1f00c69601" containerName="proxy-httpd" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.682741 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="845e48b0-71b1-4f3b-82d2-db1f00c69601" containerName="ceilometer-central-agent" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.684415 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.686360 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.686616 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.686768 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.699097 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.731256 4820 scope.go:117] "RemoveContainer" containerID="cb169db8a65e1c2d5be7a7a22a3fa4c091eaecfa985f45b5fefcf1d095f29214" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.753647 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-config-data\") pod \"ceilometer-0\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " pod="openstack/ceilometer-0" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.753697 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " pod="openstack/ceilometer-0" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.753785 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9855d957-a352-426f-8b46-0f77f47c0d6c-run-httpd\") pod \"ceilometer-0\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " pod="openstack/ceilometer-0" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.753836 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9855d957-a352-426f-8b46-0f77f47c0d6c-log-httpd\") pod \"ceilometer-0\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " pod="openstack/ceilometer-0" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.753857 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " pod="openstack/ceilometer-0" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.753907 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-scripts\") pod \"ceilometer-0\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " pod="openstack/ceilometer-0" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.753931 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hktpl\" (UniqueName: \"kubernetes.io/projected/9855d957-a352-426f-8b46-0f77f47c0d6c-kube-api-access-hktpl\") pod \"ceilometer-0\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " pod="openstack/ceilometer-0" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.753968 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " pod="openstack/ceilometer-0" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.855716 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " pod="openstack/ceilometer-0" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.855768 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-config-data\") pod \"ceilometer-0\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " pod="openstack/ceilometer-0" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.855801 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " pod="openstack/ceilometer-0" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.855898 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9855d957-a352-426f-8b46-0f77f47c0d6c-run-httpd\") pod \"ceilometer-0\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " pod="openstack/ceilometer-0" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.855948 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9855d957-a352-426f-8b46-0f77f47c0d6c-log-httpd\") pod \"ceilometer-0\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " pod="openstack/ceilometer-0" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.855974 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " pod="openstack/ceilometer-0" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.856024 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-scripts\") pod \"ceilometer-0\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " pod="openstack/ceilometer-0" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.856059 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hktpl\" (UniqueName: \"kubernetes.io/projected/9855d957-a352-426f-8b46-0f77f47c0d6c-kube-api-access-hktpl\") pod \"ceilometer-0\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " pod="openstack/ceilometer-0" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.857986 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9855d957-a352-426f-8b46-0f77f47c0d6c-log-httpd\") pod \"ceilometer-0\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " pod="openstack/ceilometer-0" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.858242 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9855d957-a352-426f-8b46-0f77f47c0d6c-run-httpd\") pod \"ceilometer-0\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " pod="openstack/ceilometer-0" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.860979 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " pod="openstack/ceilometer-0" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.861528 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " pod="openstack/ceilometer-0" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.861844 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-scripts\") pod \"ceilometer-0\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " pod="openstack/ceilometer-0" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.862395 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " pod="openstack/ceilometer-0" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.862532 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-config-data\") pod \"ceilometer-0\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " pod="openstack/ceilometer-0" Feb 01 14:41:04 crc kubenswrapper[4820]: I0201 14:41:04.874283 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hktpl\" (UniqueName: \"kubernetes.io/projected/9855d957-a352-426f-8b46-0f77f47c0d6c-kube-api-access-hktpl\") pod \"ceilometer-0\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " pod="openstack/ceilometer-0" Feb 01 14:41:05 crc kubenswrapper[4820]: I0201 14:41:05.011530 4820 util.go:30] "No sandbox for pod can be found. 
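[editor's note] The ceilometer-0 entries above trace the kubelet volume manager's reconcile pass end to end: each volume the new pod UID requires is first registered (operationExecutor.VerifyControllerAttachedVolume), then the reconciler mounts whatever the actual state is still missing, logging one MountVolume.SetUp succeeded per volume. A minimal Python sketch of that desired-state vs. actual-state loop follows; all names are illustrative, not the kubelet's own data structures:

    # Simplified model of the reconcile pass seen above: mount everything the
    # pod spec wants that is not yet mounted, unmount volumes of removed pods.
    desired = {  # pod UID -> volumes required by the pod spec
        "9855d957-a352-426f-8b46-0f77f47c0d6c": {
            "config-data", "scripts", "run-httpd", "log-httpd",
            "sg-core-conf-yaml", "ceilometer-tls-certs",
            "combined-ca-bundle", "kube-api-access-hktpl",
        },
    }
    actual = {"9855d957-a352-426f-8b46-0f77f47c0d6c": set()}  # nothing mounted yet

    def reconcile(desired, actual):
        for pod_uid, volumes in desired.items():
            mounted = actual.setdefault(pod_uid, set())
            for vol in sorted(volumes - mounted):      # mount what is missing
                print(f'MountVolume started for volume "{vol}" pod "{pod_uid}"')
                mounted.add(vol)                       # MountVolume.SetUp succeeded
        for pod_uid in list(actual):                   # unmount orphaned pods
            if pod_uid not in desired:
                for vol in sorted(actual.pop(pod_uid)):
                    print(f'UnmountVolume started for volume "{vol}" pod "{pod_uid}"')

    reconcile(desired, actual)

Running the sketch prints one mount line per volume, mirroring the eight SetUp-succeeded entries above; a later pass with the pod removed from desired would print the UnmountVolume lines seen for the old pod UID.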
Feb 01 14:41:05 crc kubenswrapper[4820]: I0201 14:41:05.212482 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="845e48b0-71b1-4f3b-82d2-db1f00c69601" path="/var/lib/kubelet/pods/845e48b0-71b1-4f3b-82d2-db1f00c69601/volumes"
Feb 01 14:41:05 crc kubenswrapper[4820]: I0201 14:41:05.460185 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 01 14:41:05 crc kubenswrapper[4820]: W0201 14:41:05.464455 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9855d957_a352_426f_8b46_0f77f47c0d6c.slice/crio-73fa06ff434fc50751ad0428b9bb9595d2aaf7ed22bfaab172662d836d0b0092 WatchSource:0}: Error finding container 73fa06ff434fc50751ad0428b9bb9595d2aaf7ed22bfaab172662d836d0b0092: Status 404 returned error can't find the container with id 73fa06ff434fc50751ad0428b9bb9595d2aaf7ed22bfaab172662d836d0b0092
Feb 01 14:41:05 crc kubenswrapper[4820]: I0201 14:41:05.633039 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9855d957-a352-426f-8b46-0f77f47c0d6c","Type":"ContainerStarted","Data":"73fa06ff434fc50751ad0428b9bb9595d2aaf7ed22bfaab172662d836d0b0092"}
Feb 01 14:41:06 crc kubenswrapper[4820]: I0201 14:41:06.645384 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9855d957-a352-426f-8b46-0f77f47c0d6c","Type":"ContainerStarted","Data":"4b379e07fe57f063361a1ce621aa4a84c0e98687f48d8b08a4f1a9f37a91348f"}
Feb 01 14:41:07 crc kubenswrapper[4820]: I0201 14:41:07.654410 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9855d957-a352-426f-8b46-0f77f47c0d6c","Type":"ContainerStarted","Data":"baa6707e177e6515b30b164bfaaf7a0ec0d7d8f51bd4b5f6f8a26b93062a8f75"}
Feb 01 14:41:07 crc kubenswrapper[4820]: I0201 14:41:07.654752 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9855d957-a352-426f-8b46-0f77f47c0d6c","Type":"ContainerStarted","Data":"90558a52ac83d67af670de478e94d8575dbd0d29f9c3d41de8632a79fcf19341"}
Feb 01 14:41:09 crc kubenswrapper[4820]: I0201 14:41:09.956522 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 01 14:41:10 crc kubenswrapper[4820]: I0201 14:41:10.681492 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9855d957-a352-426f-8b46-0f77f47c0d6c","Type":"ContainerStarted","Data":"de55a0e41edf9855c9c4ab441b97d105ed9be3b272d8f2a729bd53f87631c592"}
Feb 01 14:41:10 crc kubenswrapper[4820]: I0201 14:41:10.682943 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 01 14:41:10 crc kubenswrapper[4820]: I0201 14:41:10.717980 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.43148938 podStartE2EDuration="6.71795317s" podCreationTimestamp="2026-02-01 14:41:04 +0000 UTC" firstStartedPulling="2026-02-01 14:41:05.466606089 +0000 UTC m=+1206.986972383" lastFinishedPulling="2026-02-01 14:41:09.753069889 +0000 UTC m=+1211.273436173" observedRunningTime="2026-02-01 14:41:10.712547618 +0000 UTC m=+1212.232913922" watchObservedRunningTime="2026-02-01 14:41:10.71795317 +0000 UTC m=+1212.238319464"
Feb 01 14:41:35 crc kubenswrapper[4820]: I0201 14:41:35.025511 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
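[editor's note] The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted, so pull time is excluded from the startup SLO. Re-deriving the numbers from the logged timestamps (truncated to microseconds, since datetime's %f takes at most six fractional digits):

    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    created       = datetime.strptime("2026-02-01 14:41:04.000000", fmt)
    first_pulling = datetime.strptime("2026-02-01 14:41:05.466606", fmt)
    last_pulled   = datetime.strptime("2026-02-01 14:41:09.753069", fmt)
    running       = datetime.strptime("2026-02-01 14:41:10.717953", fmt)

    e2e  = (running - created).total_seconds()            # ~6.717953 s
    pull = (last_pulled - first_pulling).total_seconds()  # ~4.286463 s
    slo  = e2e - pull                                     # ~2.431490 s
    print(f"podStartE2EDuration={e2e:.6f}s podStartSLOduration={slo:.6f}s")

This reproduces the logged 6.71795317s and 2.43148938 to within the truncated nanoseconds (the kubelet differences the monotonic m=+ offsets, which is why the last digit matches the m values rather than the wall-clock ones).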
status="ready" pod="openstack/ceilometer-0" Feb 01 14:41:43 crc kubenswrapper[4820]: I0201 14:41:43.485288 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 01 14:41:44 crc kubenswrapper[4820]: I0201 14:41:44.255295 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 01 14:41:47 crc kubenswrapper[4820]: I0201 14:41:47.934621 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="e9cdc675-a849-4f24-bca1-ea5c04c55b52" containerName="rabbitmq" containerID="cri-o://81b2c107480a1a650bf6fa3cbeb4f2affc419c4c25405a0b0581401cb113a6c2" gracePeriod=604796 Feb 01 14:41:48 crc kubenswrapper[4820]: I0201 14:41:48.450244 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="4c49d65b-e444-406e-8b45-e95ba6bbb52b" containerName="rabbitmq" containerID="cri-o://75f834141cda94d4a671e93fd2d8a1fb913883ce1307ecbcad100d5c62006d12" gracePeriod=604796 Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.476848 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.587302 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e9cdc675-a849-4f24-bca1-ea5c04c55b52-server-conf\") pod \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.587648 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e9cdc675-a849-4f24-bca1-ea5c04c55b52-rabbitmq-erlang-cookie\") pod \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.587679 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e9cdc675-a849-4f24-bca1-ea5c04c55b52-rabbitmq-plugins\") pod \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.587694 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.587748 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e9cdc675-a849-4f24-bca1-ea5c04c55b52-rabbitmq-confd\") pod \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.587790 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqzsg\" (UniqueName: \"kubernetes.io/projected/e9cdc675-a849-4f24-bca1-ea5c04c55b52-kube-api-access-kqzsg\") pod \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.587836 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/e9cdc675-a849-4f24-bca1-ea5c04c55b52-plugins-conf\") pod \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.587923 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e9cdc675-a849-4f24-bca1-ea5c04c55b52-erlang-cookie-secret\") pod \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.587978 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9cdc675-a849-4f24-bca1-ea5c04c55b52-config-data\") pod \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.588469 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9cdc675-a849-4f24-bca1-ea5c04c55b52-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e9cdc675-a849-4f24-bca1-ea5c04c55b52" (UID: "e9cdc675-a849-4f24-bca1-ea5c04c55b52"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.588660 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9cdc675-a849-4f24-bca1-ea5c04c55b52-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e9cdc675-a849-4f24-bca1-ea5c04c55b52" (UID: "e9cdc675-a849-4f24-bca1-ea5c04c55b52"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.588729 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e9cdc675-a849-4f24-bca1-ea5c04c55b52-pod-info\") pod \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.588777 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e9cdc675-a849-4f24-bca1-ea5c04c55b52-rabbitmq-tls\") pod \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\" (UID: \"e9cdc675-a849-4f24-bca1-ea5c04c55b52\") " Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.589248 4820 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e9cdc675-a849-4f24-bca1-ea5c04c55b52-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.593474 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9cdc675-a849-4f24-bca1-ea5c04c55b52-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e9cdc675-a849-4f24-bca1-ea5c04c55b52" (UID: "e9cdc675-a849-4f24-bca1-ea5c04c55b52"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.595615 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9cdc675-a849-4f24-bca1-ea5c04c55b52-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e9cdc675-a849-4f24-bca1-ea5c04c55b52" (UID: "e9cdc675-a849-4f24-bca1-ea5c04c55b52"). 
InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.600228 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9cdc675-a849-4f24-bca1-ea5c04c55b52-kube-api-access-kqzsg" (OuterVolumeSpecName: "kube-api-access-kqzsg") pod "e9cdc675-a849-4f24-bca1-ea5c04c55b52" (UID: "e9cdc675-a849-4f24-bca1-ea5c04c55b52"). InnerVolumeSpecName "kube-api-access-kqzsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.600336 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e9cdc675-a849-4f24-bca1-ea5c04c55b52-pod-info" (OuterVolumeSpecName: "pod-info") pod "e9cdc675-a849-4f24-bca1-ea5c04c55b52" (UID: "e9cdc675-a849-4f24-bca1-ea5c04c55b52"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.627335 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9cdc675-a849-4f24-bca1-ea5c04c55b52-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e9cdc675-a849-4f24-bca1-ea5c04c55b52" (UID: "e9cdc675-a849-4f24-bca1-ea5c04c55b52"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.628343 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9cdc675-a849-4f24-bca1-ea5c04c55b52-config-data" (OuterVolumeSpecName: "config-data") pod "e9cdc675-a849-4f24-bca1-ea5c04c55b52" (UID: "e9cdc675-a849-4f24-bca1-ea5c04c55b52"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.641814 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "e9cdc675-a849-4f24-bca1-ea5c04c55b52" (UID: "e9cdc675-a849-4f24-bca1-ea5c04c55b52"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.645729 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9cdc675-a849-4f24-bca1-ea5c04c55b52-server-conf" (OuterVolumeSpecName: "server-conf") pod "e9cdc675-a849-4f24-bca1-ea5c04c55b52" (UID: "e9cdc675-a849-4f24-bca1-ea5c04c55b52"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.691127 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e9cdc675-a849-4f24-bca1-ea5c04c55b52-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.691159 4820 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e9cdc675-a849-4f24-bca1-ea5c04c55b52-server-conf\") on node \"crc\" DevicePath \"\"" Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.691169 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e9cdc675-a849-4f24-bca1-ea5c04c55b52-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.691178 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e9cdc675-a849-4f24-bca1-ea5c04c55b52-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.691210 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.691219 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqzsg\" (UniqueName: \"kubernetes.io/projected/e9cdc675-a849-4f24-bca1-ea5c04c55b52-kube-api-access-kqzsg\") on node \"crc\" DevicePath \"\"" Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.691228 4820 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e9cdc675-a849-4f24-bca1-ea5c04c55b52-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.691238 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9cdc675-a849-4f24-bca1-ea5c04c55b52-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.691246 4820 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e9cdc675-a849-4f24-bca1-ea5c04c55b52-pod-info\") on node \"crc\" DevicePath \"\"" Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.712111 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.725207 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9cdc675-a849-4f24-bca1-ea5c04c55b52-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e9cdc675-a849-4f24-bca1-ea5c04c55b52" (UID: "e9cdc675-a849-4f24-bca1-ea5c04c55b52"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.792818 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.792859 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e9cdc675-a849-4f24-bca1-ea5c04c55b52-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 01 14:41:54 crc kubenswrapper[4820]: I0201 14:41:54.957704 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.081794 4820 generic.go:334] "Generic (PLEG): container finished" podID="4c49d65b-e444-406e-8b45-e95ba6bbb52b" containerID="75f834141cda94d4a671e93fd2d8a1fb913883ce1307ecbcad100d5c62006d12" exitCode=0 Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.081849 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c49d65b-e444-406e-8b45-e95ba6bbb52b","Type":"ContainerDied","Data":"75f834141cda94d4a671e93fd2d8a1fb913883ce1307ecbcad100d5c62006d12"} Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.081925 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c49d65b-e444-406e-8b45-e95ba6bbb52b","Type":"ContainerDied","Data":"d6522310a7fbc00eacd33e5fe9e9971a955c53c810766d4dbdedafab4758d7f7"} Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.081948 4820 scope.go:117] "RemoveContainer" containerID="75f834141cda94d4a671e93fd2d8a1fb913883ce1307ecbcad100d5c62006d12" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.081919 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.084284 4820 generic.go:334] "Generic (PLEG): container finished" podID="e9cdc675-a849-4f24-bca1-ea5c04c55b52" containerID="81b2c107480a1a650bf6fa3cbeb4f2affc419c4c25405a0b0581401cb113a6c2" exitCode=0 Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.084314 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e9cdc675-a849-4f24-bca1-ea5c04c55b52","Type":"ContainerDied","Data":"81b2c107480a1a650bf6fa3cbeb4f2affc419c4c25405a0b0581401cb113a6c2"} Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.084334 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e9cdc675-a849-4f24-bca1-ea5c04c55b52","Type":"ContainerDied","Data":"3623316050aeeb0566bce48e2e10ec8f59b074adf549d3b6cd4854bc4e266e8f"} Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.084624 4820 util.go:48] "No ready sandbox for pod can be found. 
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.102413 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c49d65b-e444-406e-8b45-e95ba6bbb52b-config-data\") pod \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") "
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.102450 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c49d65b-e444-406e-8b45-e95ba6bbb52b-pod-info\") pod \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") "
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.102505 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c49d65b-e444-406e-8b45-e95ba6bbb52b-rabbitmq-plugins\") pod \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") "
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.102550 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") "
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.102611 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c49d65b-e444-406e-8b45-e95ba6bbb52b-rabbitmq-erlang-cookie\") pod \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") "
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.102652 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c49d65b-e444-406e-8b45-e95ba6bbb52b-plugins-conf\") pod \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") "
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.102687 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c49d65b-e444-406e-8b45-e95ba6bbb52b-erlang-cookie-secret\") pod \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") "
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.102712 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c49d65b-e444-406e-8b45-e95ba6bbb52b-rabbitmq-confd\") pod \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") "
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.102755 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c49d65b-e444-406e-8b45-e95ba6bbb52b-rabbitmq-tls\") pod \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") "
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.102794 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c49d65b-e444-406e-8b45-e95ba6bbb52b-server-conf\") pod \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") "
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.102855 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vct5w\" (UniqueName: \"kubernetes.io/projected/4c49d65b-e444-406e-8b45-e95ba6bbb52b-kube-api-access-vct5w\") pod \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\" (UID: \"4c49d65b-e444-406e-8b45-e95ba6bbb52b\") "
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.103679 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c49d65b-e444-406e-8b45-e95ba6bbb52b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4c49d65b-e444-406e-8b45-e95ba6bbb52b" (UID: "4c49d65b-e444-406e-8b45-e95ba6bbb52b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.104226 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c49d65b-e444-406e-8b45-e95ba6bbb52b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4c49d65b-e444-406e-8b45-e95ba6bbb52b" (UID: "4c49d65b-e444-406e-8b45-e95ba6bbb52b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.106278 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c49d65b-e444-406e-8b45-e95ba6bbb52b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4c49d65b-e444-406e-8b45-e95ba6bbb52b" (UID: "4c49d65b-e444-406e-8b45-e95ba6bbb52b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.114995 4820 scope.go:117] "RemoveContainer" containerID="27369d8748d90c3ec8b78c97a7e82fceb121c2da173c2abff85a7bc25bf5ce13"
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.120001 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "4c49d65b-e444-406e-8b45-e95ba6bbb52b" (UID: "4c49d65b-e444-406e-8b45-e95ba6bbb52b"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.120644 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4c49d65b-e444-406e-8b45-e95ba6bbb52b-pod-info" (OuterVolumeSpecName: "pod-info") pod "4c49d65b-e444-406e-8b45-e95ba6bbb52b" (UID: "4c49d65b-e444-406e-8b45-e95ba6bbb52b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.126814 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c49d65b-e444-406e-8b45-e95ba6bbb52b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "4c49d65b-e444-406e-8b45-e95ba6bbb52b" (UID: "4c49d65b-e444-406e-8b45-e95ba6bbb52b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.130535 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c49d65b-e444-406e-8b45-e95ba6bbb52b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4c49d65b-e444-406e-8b45-e95ba6bbb52b" (UID: "4c49d65b-e444-406e-8b45-e95ba6bbb52b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.133240 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c49d65b-e444-406e-8b45-e95ba6bbb52b-config-data" (OuterVolumeSpecName: "config-data") pod "4c49d65b-e444-406e-8b45-e95ba6bbb52b" (UID: "4c49d65b-e444-406e-8b45-e95ba6bbb52b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.136374 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c49d65b-e444-406e-8b45-e95ba6bbb52b-kube-api-access-vct5w" (OuterVolumeSpecName: "kube-api-access-vct5w") pod "4c49d65b-e444-406e-8b45-e95ba6bbb52b" (UID: "4c49d65b-e444-406e-8b45-e95ba6bbb52b"). InnerVolumeSpecName "kube-api-access-vct5w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.168318 4820 scope.go:117] "RemoveContainer" containerID="75f834141cda94d4a671e93fd2d8a1fb913883ce1307ecbcad100d5c62006d12"
Feb 01 14:41:55 crc kubenswrapper[4820]: E0201 14:41:55.168900 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75f834141cda94d4a671e93fd2d8a1fb913883ce1307ecbcad100d5c62006d12\": container with ID starting with 75f834141cda94d4a671e93fd2d8a1fb913883ce1307ecbcad100d5c62006d12 not found: ID does not exist" containerID="75f834141cda94d4a671e93fd2d8a1fb913883ce1307ecbcad100d5c62006d12"
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.168941 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75f834141cda94d4a671e93fd2d8a1fb913883ce1307ecbcad100d5c62006d12"} err="failed to get container status \"75f834141cda94d4a671e93fd2d8a1fb913883ce1307ecbcad100d5c62006d12\": rpc error: code = NotFound desc = could not find container \"75f834141cda94d4a671e93fd2d8a1fb913883ce1307ecbcad100d5c62006d12\": container with ID starting with 75f834141cda94d4a671e93fd2d8a1fb913883ce1307ecbcad100d5c62006d12 not found: ID does not exist"
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.168967 4820 scope.go:117] "RemoveContainer" containerID="27369d8748d90c3ec8b78c97a7e82fceb121c2da173c2abff85a7bc25bf5ce13"
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.169068 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 01 14:41:55 crc kubenswrapper[4820]: E0201 14:41:55.169417 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27369d8748d90c3ec8b78c97a7e82fceb121c2da173c2abff85a7bc25bf5ce13\": container with ID starting with 27369d8748d90c3ec8b78c97a7e82fceb121c2da173c2abff85a7bc25bf5ce13 not found: ID does not exist" containerID="27369d8748d90c3ec8b78c97a7e82fceb121c2da173c2abff85a7bc25bf5ce13"
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.169436 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27369d8748d90c3ec8b78c97a7e82fceb121c2da173c2abff85a7bc25bf5ce13"} err="failed to get container status \"27369d8748d90c3ec8b78c97a7e82fceb121c2da173c2abff85a7bc25bf5ce13\": rpc error: code = NotFound desc = could not find container \"27369d8748d90c3ec8b78c97a7e82fceb121c2da173c2abff85a7bc25bf5ce13\": container with ID starting with 27369d8748d90c3ec8b78c97a7e82fceb121c2da173c2abff85a7bc25bf5ce13 not found: ID does not exist"
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.169448 4820 scope.go:117] "RemoveContainer" containerID="81b2c107480a1a650bf6fa3cbeb4f2affc419c4c25405a0b0581401cb113a6c2"
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.184932 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.204751 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vct5w\" (UniqueName: \"kubernetes.io/projected/4c49d65b-e444-406e-8b45-e95ba6bbb52b-kube-api-access-vct5w\") on node \"crc\" DevicePath \"\""
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.204783 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c49d65b-e444-406e-8b45-e95ba6bbb52b-config-data\") on node \"crc\" DevicePath \"\""
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.204792 4820 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c49d65b-e444-406e-8b45-e95ba6bbb52b-pod-info\") on node \"crc\" DevicePath \"\""
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.204800 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c49d65b-e444-406e-8b45-e95ba6bbb52b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.204828 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.204837 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c49d65b-e444-406e-8b45-e95ba6bbb52b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.204846 4820 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c49d65b-e444-406e-8b45-e95ba6bbb52b-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.204854 4820 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c49d65b-e444-406e-8b45-e95ba6bbb52b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.204863 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c49d65b-e444-406e-8b45-e95ba6bbb52b-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.207645 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c49d65b-e444-406e-8b45-e95ba6bbb52b-server-conf" (OuterVolumeSpecName: "server-conf") pod "4c49d65b-e444-406e-8b45-e95ba6bbb52b" (UID: "4c49d65b-e444-406e-8b45-e95ba6bbb52b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
"4c49d65b-e444-406e-8b45-e95ba6bbb52b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.231007 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9cdc675-a849-4f24-bca1-ea5c04c55b52" path="/var/lib/kubelet/pods/e9cdc675-a849-4f24-bca1-ea5c04c55b52/volumes" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.240346 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.266279 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c49d65b-e444-406e-8b45-e95ba6bbb52b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4c49d65b-e444-406e-8b45-e95ba6bbb52b" (UID: "4c49d65b-e444-406e-8b45-e95ba6bbb52b"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.307400 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.307427 4820 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c49d65b-e444-406e-8b45-e95ba6bbb52b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.307437 4820 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c49d65b-e444-406e-8b45-e95ba6bbb52b-server-conf\") on node \"crc\" DevicePath \"\"" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.314045 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 01 14:41:55 crc kubenswrapper[4820]: E0201 14:41:55.314395 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c49d65b-e444-406e-8b45-e95ba6bbb52b" containerName="rabbitmq" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.314412 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c49d65b-e444-406e-8b45-e95ba6bbb52b" containerName="rabbitmq" Feb 01 14:41:55 crc kubenswrapper[4820]: E0201 14:41:55.314450 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9cdc675-a849-4f24-bca1-ea5c04c55b52" containerName="setup-container" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.314460 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9cdc675-a849-4f24-bca1-ea5c04c55b52" containerName="setup-container" Feb 01 14:41:55 crc kubenswrapper[4820]: E0201 14:41:55.314470 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c49d65b-e444-406e-8b45-e95ba6bbb52b" containerName="setup-container" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.314477 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c49d65b-e444-406e-8b45-e95ba6bbb52b" containerName="setup-container" Feb 01 14:41:55 crc kubenswrapper[4820]: E0201 14:41:55.314493 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9cdc675-a849-4f24-bca1-ea5c04c55b52" containerName="rabbitmq" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.314500 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9cdc675-a849-4f24-bca1-ea5c04c55b52" containerName="rabbitmq" Feb 01 14:41:55 
crc kubenswrapper[4820]: I0201 14:41:55.314689 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c49d65b-e444-406e-8b45-e95ba6bbb52b" containerName="rabbitmq" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.314718 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9cdc675-a849-4f24-bca1-ea5c04c55b52" containerName="rabbitmq" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.315557 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.315660 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.318272 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.318720 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.318957 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.319108 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.319177 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-l6rfn" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.319106 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.319359 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.334215 4820 scope.go:117] "RemoveContainer" containerID="8ca585fe94781bc03e18c5f6a239e1473552286942afa45312fecc7896c0516d" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.367397 4820 scope.go:117] "RemoveContainer" containerID="81b2c107480a1a650bf6fa3cbeb4f2affc419c4c25405a0b0581401cb113a6c2" Feb 01 14:41:55 crc kubenswrapper[4820]: E0201 14:41:55.367910 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81b2c107480a1a650bf6fa3cbeb4f2affc419c4c25405a0b0581401cb113a6c2\": container with ID starting with 81b2c107480a1a650bf6fa3cbeb4f2affc419c4c25405a0b0581401cb113a6c2 not found: ID does not exist" containerID="81b2c107480a1a650bf6fa3cbeb4f2affc419c4c25405a0b0581401cb113a6c2" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.367950 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81b2c107480a1a650bf6fa3cbeb4f2affc419c4c25405a0b0581401cb113a6c2"} err="failed to get container status \"81b2c107480a1a650bf6fa3cbeb4f2affc419c4c25405a0b0581401cb113a6c2\": rpc error: code = NotFound desc = could not find container \"81b2c107480a1a650bf6fa3cbeb4f2affc419c4c25405a0b0581401cb113a6c2\": container with ID starting with 81b2c107480a1a650bf6fa3cbeb4f2affc419c4c25405a0b0581401cb113a6c2 not found: ID does not exist" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.367978 4820 scope.go:117] "RemoveContainer" containerID="8ca585fe94781bc03e18c5f6a239e1473552286942afa45312fecc7896c0516d" Feb 01 14:41:55 crc kubenswrapper[4820]: E0201 14:41:55.368229 4820 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ca585fe94781bc03e18c5f6a239e1473552286942afa45312fecc7896c0516d\": container with ID starting with 8ca585fe94781bc03e18c5f6a239e1473552286942afa45312fecc7896c0516d not found: ID does not exist" containerID="8ca585fe94781bc03e18c5f6a239e1473552286942afa45312fecc7896c0516d" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.368281 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ca585fe94781bc03e18c5f6a239e1473552286942afa45312fecc7896c0516d"} err="failed to get container status \"8ca585fe94781bc03e18c5f6a239e1473552286942afa45312fecc7896c0516d\": rpc error: code = NotFound desc = could not find container \"8ca585fe94781bc03e18c5f6a239e1473552286942afa45312fecc7896c0516d\": container with ID starting with 8ca585fe94781bc03e18c5f6a239e1473552286942afa45312fecc7896c0516d not found: ID does not exist" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.408838 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/68ed3721-fba2-41c2-bb4a-ee20df021175-config-data\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.408894 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/68ed3721-fba2-41c2-bb4a-ee20df021175-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.408924 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/68ed3721-fba2-41c2-bb4a-ee20df021175-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.409947 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/68ed3721-fba2-41c2-bb4a-ee20df021175-server-conf\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.410044 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/68ed3721-fba2-41c2-bb4a-ee20df021175-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.410077 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sclm\" (UniqueName: \"kubernetes.io/projected/68ed3721-fba2-41c2-bb4a-ee20df021175-kube-api-access-8sclm\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.410160 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/68ed3721-fba2-41c2-bb4a-ee20df021175-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.410274 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/68ed3721-fba2-41c2-bb4a-ee20df021175-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.410335 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.410440 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/68ed3721-fba2-41c2-bb4a-ee20df021175-pod-info\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.410562 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/68ed3721-fba2-41c2-bb4a-ee20df021175-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.428011 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.443580 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.461287 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.463294 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.470932 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.471539 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.471777 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.472040 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.472379 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.472556 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-wpl94" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.472666 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.475070 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.519446 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/68ed3721-fba2-41c2-bb4a-ee20df021175-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.519531 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/68ed3721-fba2-41c2-bb4a-ee20df021175-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.519563 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.519599 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/68ed3721-fba2-41c2-bb4a-ee20df021175-pod-info\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.519651 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/68ed3721-fba2-41c2-bb4a-ee20df021175-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.519693 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/68ed3721-fba2-41c2-bb4a-ee20df021175-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.519715 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/68ed3721-fba2-41c2-bb4a-ee20df021175-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.519735 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/68ed3721-fba2-41c2-bb4a-ee20df021175-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.519757 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/68ed3721-fba2-41c2-bb4a-ee20df021175-server-conf\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.519780 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/68ed3721-fba2-41c2-bb4a-ee20df021175-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.519804 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sclm\" (UniqueName: \"kubernetes.io/projected/68ed3721-fba2-41c2-bb4a-ee20df021175-kube-api-access-8sclm\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.520948 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.521804 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/68ed3721-fba2-41c2-bb4a-ee20df021175-config-data\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.522801 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/68ed3721-fba2-41c2-bb4a-ee20df021175-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.523295 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/68ed3721-fba2-41c2-bb4a-ee20df021175-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.524660 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/68ed3721-fba2-41c2-bb4a-ee20df021175-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.525130 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/68ed3721-fba2-41c2-bb4a-ee20df021175-server-conf\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.525730 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/68ed3721-fba2-41c2-bb4a-ee20df021175-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.526922 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/68ed3721-fba2-41c2-bb4a-ee20df021175-pod-info\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.528013 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/68ed3721-fba2-41c2-bb4a-ee20df021175-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.528161 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/68ed3721-fba2-41c2-bb4a-ee20df021175-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.535968 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sclm\" (UniqueName: \"kubernetes.io/projected/68ed3721-fba2-41c2-bb4a-ee20df021175-kube-api-access-8sclm\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.561991 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"68ed3721-fba2-41c2-bb4a-ee20df021175\") " pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.621771 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec613d0e-38d1-4ba1-950c-130a412ace9b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.621914 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec613d0e-38d1-4ba1-950c-130a412ace9b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: 
I0201 14:41:55.621970 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec613d0e-38d1-4ba1-950c-130a412ace9b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.621995 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec613d0e-38d1-4ba1-950c-130a412ace9b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.622036 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgkjx\" (UniqueName: \"kubernetes.io/projected/ec613d0e-38d1-4ba1-950c-130a412ace9b-kube-api-access-wgkjx\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.622083 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec613d0e-38d1-4ba1-950c-130a412ace9b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.622118 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec613d0e-38d1-4ba1-950c-130a412ace9b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.622302 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.622388 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec613d0e-38d1-4ba1-950c-130a412ace9b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.622419 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec613d0e-38d1-4ba1-950c-130a412ace9b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.622481 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec613d0e-38d1-4ba1-950c-130a412ace9b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc 
kubenswrapper[4820]: I0201 14:41:55.646551 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.724147 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec613d0e-38d1-4ba1-950c-130a412ace9b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.724433 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec613d0e-38d1-4ba1-950c-130a412ace9b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.724514 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.724539 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec613d0e-38d1-4ba1-950c-130a412ace9b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.724575 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec613d0e-38d1-4ba1-950c-130a412ace9b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.724596 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec613d0e-38d1-4ba1-950c-130a412ace9b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.724627 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec613d0e-38d1-4ba1-950c-130a412ace9b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.724734 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec613d0e-38d1-4ba1-950c-130a412ace9b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.724777 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec613d0e-38d1-4ba1-950c-130a412ace9b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 
14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.724793 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec613d0e-38d1-4ba1-950c-130a412ace9b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.724841 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgkjx\" (UniqueName: \"kubernetes.io/projected/ec613d0e-38d1-4ba1-950c-130a412ace9b-kube-api-access-wgkjx\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.725927 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec613d0e-38d1-4ba1-950c-130a412ace9b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.726822 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.728747 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec613d0e-38d1-4ba1-950c-130a412ace9b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.729550 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec613d0e-38d1-4ba1-950c-130a412ace9b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.729932 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec613d0e-38d1-4ba1-950c-130a412ace9b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.730416 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec613d0e-38d1-4ba1-950c-130a412ace9b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.734084 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec613d0e-38d1-4ba1-950c-130a412ace9b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.736827 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/ec613d0e-38d1-4ba1-950c-130a412ace9b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.742760 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec613d0e-38d1-4ba1-950c-130a412ace9b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.742760 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec613d0e-38d1-4ba1-950c-130a412ace9b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.748855 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgkjx\" (UniqueName: \"kubernetes.io/projected/ec613d0e-38d1-4ba1-950c-130a412ace9b-kube-api-access-wgkjx\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.760449 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec613d0e-38d1-4ba1-950c-130a412ace9b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:55 crc kubenswrapper[4820]: I0201 14:41:55.785181 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:41:56 crc kubenswrapper[4820]: I0201 14:41:56.079422 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 01 14:41:56 crc kubenswrapper[4820]: I0201 14:41:56.092911 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"68ed3721-fba2-41c2-bb4a-ee20df021175","Type":"ContainerStarted","Data":"d7867294a8f71f5da0a671fcb481d869c121e3b0c019a6849906aaf317161b20"} Feb 01 14:41:56 crc kubenswrapper[4820]: I0201 14:41:56.266488 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 01 14:41:57 crc kubenswrapper[4820]: I0201 14:41:57.106197 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ec613d0e-38d1-4ba1-950c-130a412ace9b","Type":"ContainerStarted","Data":"a2eceb25e6e4e242101759e5e0b05785037e09efb3479b864f1ac966620be871"} Feb 01 14:41:57 crc kubenswrapper[4820]: I0201 14:41:57.227342 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c49d65b-e444-406e-8b45-e95ba6bbb52b" path="/var/lib/kubelet/pods/4c49d65b-e444-406e-8b45-e95ba6bbb52b/volumes" Feb 01 14:41:57 crc kubenswrapper[4820]: I0201 14:41:57.716846 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-qtk24"] Feb 01 14:41:57 crc kubenswrapper[4820]: I0201 14:41:57.719223 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-qtk24" Feb 01 14:41:57 crc kubenswrapper[4820]: I0201 14:41:57.720975 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 01 14:41:57 crc kubenswrapper[4820]: I0201 14:41:57.731424 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-qtk24"] Feb 01 14:41:57 crc kubenswrapper[4820]: I0201 14:41:57.892364 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-dns-svc\") pod \"dnsmasq-dns-578b8d767c-qtk24\" (UID: \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\") " pod="openstack/dnsmasq-dns-578b8d767c-qtk24" Feb 01 14:41:57 crc kubenswrapper[4820]: I0201 14:41:57.892442 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-qtk24\" (UID: \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\") " pod="openstack/dnsmasq-dns-578b8d767c-qtk24" Feb 01 14:41:57 crc kubenswrapper[4820]: I0201 14:41:57.892526 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-config\") pod \"dnsmasq-dns-578b8d767c-qtk24\" (UID: \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\") " pod="openstack/dnsmasq-dns-578b8d767c-qtk24" Feb 01 14:41:57 crc kubenswrapper[4820]: I0201 14:41:57.892627 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgthq\" (UniqueName: \"kubernetes.io/projected/c611b5b9-38b8-42a6-bbb4-661f53f74a52-kube-api-access-qgthq\") pod \"dnsmasq-dns-578b8d767c-qtk24\" (UID: \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\") " pod="openstack/dnsmasq-dns-578b8d767c-qtk24" Feb 01 14:41:57 crc kubenswrapper[4820]: I0201 14:41:57.892671 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-qtk24\" (UID: \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\") " pod="openstack/dnsmasq-dns-578b8d767c-qtk24" Feb 01 14:41:57 crc kubenswrapper[4820]: I0201 14:41:57.892699 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-qtk24\" (UID: \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\") " pod="openstack/dnsmasq-dns-578b8d767c-qtk24" Feb 01 14:41:57 crc kubenswrapper[4820]: I0201 14:41:57.993970 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-qtk24\" (UID: \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\") " pod="openstack/dnsmasq-dns-578b8d767c-qtk24" Feb 01 14:41:57 crc kubenswrapper[4820]: I0201 14:41:57.994064 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-dns-svc\") pod \"dnsmasq-dns-578b8d767c-qtk24\" (UID: 
\"c611b5b9-38b8-42a6-bbb4-661f53f74a52\") " pod="openstack/dnsmasq-dns-578b8d767c-qtk24" Feb 01 14:41:57 crc kubenswrapper[4820]: I0201 14:41:57.994125 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-qtk24\" (UID: \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\") " pod="openstack/dnsmasq-dns-578b8d767c-qtk24" Feb 01 14:41:57 crc kubenswrapper[4820]: I0201 14:41:57.994223 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-config\") pod \"dnsmasq-dns-578b8d767c-qtk24\" (UID: \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\") " pod="openstack/dnsmasq-dns-578b8d767c-qtk24" Feb 01 14:41:57 crc kubenswrapper[4820]: I0201 14:41:57.994274 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgthq\" (UniqueName: \"kubernetes.io/projected/c611b5b9-38b8-42a6-bbb4-661f53f74a52-kube-api-access-qgthq\") pod \"dnsmasq-dns-578b8d767c-qtk24\" (UID: \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\") " pod="openstack/dnsmasq-dns-578b8d767c-qtk24" Feb 01 14:41:57 crc kubenswrapper[4820]: I0201 14:41:57.994311 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-qtk24\" (UID: \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\") " pod="openstack/dnsmasq-dns-578b8d767c-qtk24" Feb 01 14:41:57 crc kubenswrapper[4820]: I0201 14:41:57.995025 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-dns-svc\") pod \"dnsmasq-dns-578b8d767c-qtk24\" (UID: \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\") " pod="openstack/dnsmasq-dns-578b8d767c-qtk24" Feb 01 14:41:57 crc kubenswrapper[4820]: I0201 14:41:57.995057 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-qtk24\" (UID: \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\") " pod="openstack/dnsmasq-dns-578b8d767c-qtk24" Feb 01 14:41:57 crc kubenswrapper[4820]: I0201 14:41:57.995075 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-qtk24\" (UID: \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\") " pod="openstack/dnsmasq-dns-578b8d767c-qtk24" Feb 01 14:41:57 crc kubenswrapper[4820]: I0201 14:41:57.995266 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-config\") pod \"dnsmasq-dns-578b8d767c-qtk24\" (UID: \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\") " pod="openstack/dnsmasq-dns-578b8d767c-qtk24" Feb 01 14:41:57 crc kubenswrapper[4820]: I0201 14:41:57.995324 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-qtk24\" (UID: \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\") " pod="openstack/dnsmasq-dns-578b8d767c-qtk24" Feb 01 14:41:58 crc 
kubenswrapper[4820]: I0201 14:41:58.013030 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgthq\" (UniqueName: \"kubernetes.io/projected/c611b5b9-38b8-42a6-bbb4-661f53f74a52-kube-api-access-qgthq\") pod \"dnsmasq-dns-578b8d767c-qtk24\" (UID: \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\") " pod="openstack/dnsmasq-dns-578b8d767c-qtk24" Feb 01 14:41:58 crc kubenswrapper[4820]: I0201 14:41:58.039344 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-qtk24" Feb 01 14:41:58 crc kubenswrapper[4820]: I0201 14:41:58.114282 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ec613d0e-38d1-4ba1-950c-130a412ace9b","Type":"ContainerStarted","Data":"45dc0f5bb3d92f5e5ac09a23482951cb89abc03c8c1f3ab0b392986c58419de1"} Feb 01 14:41:58 crc kubenswrapper[4820]: I0201 14:41:58.115584 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"68ed3721-fba2-41c2-bb4a-ee20df021175","Type":"ContainerStarted","Data":"de41cbf5a30e9227240a8b36d95671c0d76918a0d25528013e5c85fef26717e9"} Feb 01 14:41:58 crc kubenswrapper[4820]: I0201 14:41:58.521527 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-qtk24"] Feb 01 14:41:59 crc kubenswrapper[4820]: I0201 14:41:59.126614 4820 generic.go:334] "Generic (PLEG): container finished" podID="c611b5b9-38b8-42a6-bbb4-661f53f74a52" containerID="9853fa08f04313467ea123fd4a578a3a3a910ce41dacaa5d075b466f63ef3f70" exitCode=0 Feb 01 14:41:59 crc kubenswrapper[4820]: I0201 14:41:59.126770 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-qtk24" event={"ID":"c611b5b9-38b8-42a6-bbb4-661f53f74a52","Type":"ContainerDied","Data":"9853fa08f04313467ea123fd4a578a3a3a910ce41dacaa5d075b466f63ef3f70"} Feb 01 14:41:59 crc kubenswrapper[4820]: I0201 14:41:59.127132 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-qtk24" event={"ID":"c611b5b9-38b8-42a6-bbb4-661f53f74a52","Type":"ContainerStarted","Data":"b9292af4872a27937995cd3f9eb8991b147a8d0dc4d89363a8d675642dba2a55"} Feb 01 14:42:00 crc kubenswrapper[4820]: I0201 14:42:00.139162 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-qtk24" event={"ID":"c611b5b9-38b8-42a6-bbb4-661f53f74a52","Type":"ContainerStarted","Data":"8d8ae610646e59a43f0c5537d0ca5d726afc8b698c9acbc55909177146168052"} Feb 01 14:42:00 crc kubenswrapper[4820]: I0201 14:42:00.139489 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-578b8d767c-qtk24" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.040749 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-578b8d767c-qtk24" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.064341 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-578b8d767c-qtk24" podStartSLOduration=11.064303507 podStartE2EDuration="11.064303507s" podCreationTimestamp="2026-02-01 14:41:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:42:00.188238304 +0000 UTC m=+1261.708604628" watchObservedRunningTime="2026-02-01 14:42:08.064303507 +0000 UTC m=+1269.584669801" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.111022 4820 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-w7xrd"] Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.111286 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" podUID="558aab28-1ba2-46cc-9504-405fc50f326f" containerName="dnsmasq-dns" containerID="cri-o://b4385fc10a6e0340954ded696f86c91f43eab93b1bfe9f54aa6aa473118c3ba8" gracePeriod=10 Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.223311 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-2f2ml"] Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.224754 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.251601 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-2f2ml"] Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.417821 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-2f2ml\" (UID: \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.417928 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-2f2ml\" (UID: \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.418174 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-2f2ml\" (UID: \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.418232 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-2f2ml\" (UID: \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.418447 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td87b\" (UniqueName: \"kubernetes.io/projected/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-kube-api-access-td87b\") pod \"dnsmasq-dns-fbc59fbb7-2f2ml\" (UID: \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.418497 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-config\") pod \"dnsmasq-dns-fbc59fbb7-2f2ml\" (UID: \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.520544 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-2f2ml\" (UID: \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.520820 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-2f2ml\" (UID: \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.520904 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td87b\" (UniqueName: \"kubernetes.io/projected/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-kube-api-access-td87b\") pod \"dnsmasq-dns-fbc59fbb7-2f2ml\" (UID: \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.520931 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-config\") pod \"dnsmasq-dns-fbc59fbb7-2f2ml\" (UID: \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.520969 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-2f2ml\" (UID: \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.520992 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-2f2ml\" (UID: \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.522742 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-2f2ml\" (UID: \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.523078 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-2f2ml\" (UID: \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.523147 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-config\") pod \"dnsmasq-dns-fbc59fbb7-2f2ml\" (UID: \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.524486 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-2f2ml\" 
(UID: \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.525519 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-2f2ml\" (UID: \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.541611 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td87b\" (UniqueName: \"kubernetes.io/projected/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-kube-api-access-td87b\") pod \"dnsmasq-dns-fbc59fbb7-2f2ml\" (UID: \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\") " pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.581297 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.717522 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.826043 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/558aab28-1ba2-46cc-9504-405fc50f326f-ovsdbserver-nb\") pod \"558aab28-1ba2-46cc-9504-405fc50f326f\" (UID: \"558aab28-1ba2-46cc-9504-405fc50f326f\") " Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.826187 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/558aab28-1ba2-46cc-9504-405fc50f326f-config\") pod \"558aab28-1ba2-46cc-9504-405fc50f326f\" (UID: \"558aab28-1ba2-46cc-9504-405fc50f326f\") " Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.826287 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/558aab28-1ba2-46cc-9504-405fc50f326f-ovsdbserver-sb\") pod \"558aab28-1ba2-46cc-9504-405fc50f326f\" (UID: \"558aab28-1ba2-46cc-9504-405fc50f326f\") " Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.826477 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h289r\" (UniqueName: \"kubernetes.io/projected/558aab28-1ba2-46cc-9504-405fc50f326f-kube-api-access-h289r\") pod \"558aab28-1ba2-46cc-9504-405fc50f326f\" (UID: \"558aab28-1ba2-46cc-9504-405fc50f326f\") " Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.826553 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/558aab28-1ba2-46cc-9504-405fc50f326f-dns-svc\") pod \"558aab28-1ba2-46cc-9504-405fc50f326f\" (UID: \"558aab28-1ba2-46cc-9504-405fc50f326f\") " Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.833200 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/558aab28-1ba2-46cc-9504-405fc50f326f-kube-api-access-h289r" (OuterVolumeSpecName: "kube-api-access-h289r") pod "558aab28-1ba2-46cc-9504-405fc50f326f" (UID: "558aab28-1ba2-46cc-9504-405fc50f326f"). InnerVolumeSpecName "kube-api-access-h289r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.882062 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/558aab28-1ba2-46cc-9504-405fc50f326f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "558aab28-1ba2-46cc-9504-405fc50f326f" (UID: "558aab28-1ba2-46cc-9504-405fc50f326f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.891411 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/558aab28-1ba2-46cc-9504-405fc50f326f-config" (OuterVolumeSpecName: "config") pod "558aab28-1ba2-46cc-9504-405fc50f326f" (UID: "558aab28-1ba2-46cc-9504-405fc50f326f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.892115 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/558aab28-1ba2-46cc-9504-405fc50f326f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "558aab28-1ba2-46cc-9504-405fc50f326f" (UID: "558aab28-1ba2-46cc-9504-405fc50f326f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.905946 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/558aab28-1ba2-46cc-9504-405fc50f326f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "558aab28-1ba2-46cc-9504-405fc50f326f" (UID: "558aab28-1ba2-46cc-9504-405fc50f326f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.928859 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/558aab28-1ba2-46cc-9504-405fc50f326f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.928898 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/558aab28-1ba2-46cc-9504-405fc50f326f-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.928907 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/558aab28-1ba2-46cc-9504-405fc50f326f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.928917 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h289r\" (UniqueName: \"kubernetes.io/projected/558aab28-1ba2-46cc-9504-405fc50f326f-kube-api-access-h289r\") on node \"crc\" DevicePath \"\"" Feb 01 14:42:08 crc kubenswrapper[4820]: I0201 14:42:08.928927 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/558aab28-1ba2-46cc-9504-405fc50f326f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 01 14:42:09 crc kubenswrapper[4820]: I0201 14:42:09.075287 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-2f2ml"] Feb 01 14:42:09 crc kubenswrapper[4820]: W0201 14:42:09.081277 4820 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4122afb2_67c9_4360_b5d0_72ab7b8bc7ca.slice/crio-61c277d84ce6185675862194cd952b584d681456607e9ff2e9abe7277a9fb22f WatchSource:0}: Error finding container 61c277d84ce6185675862194cd952b584d681456607e9ff2e9abe7277a9fb22f: Status 404 returned error can't find the container with id 61c277d84ce6185675862194cd952b584d681456607e9ff2e9abe7277a9fb22f Feb 01 14:42:09 crc kubenswrapper[4820]: I0201 14:42:09.268467 4820 generic.go:334] "Generic (PLEG): container finished" podID="558aab28-1ba2-46cc-9504-405fc50f326f" containerID="b4385fc10a6e0340954ded696f86c91f43eab93b1bfe9f54aa6aa473118c3ba8" exitCode=0 Feb 01 14:42:09 crc kubenswrapper[4820]: I0201 14:42:09.268710 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" Feb 01 14:42:09 crc kubenswrapper[4820]: I0201 14:42:09.268727 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" event={"ID":"558aab28-1ba2-46cc-9504-405fc50f326f","Type":"ContainerDied","Data":"b4385fc10a6e0340954ded696f86c91f43eab93b1bfe9f54aa6aa473118c3ba8"} Feb 01 14:42:09 crc kubenswrapper[4820]: I0201 14:42:09.271580 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" event={"ID":"558aab28-1ba2-46cc-9504-405fc50f326f","Type":"ContainerDied","Data":"41168bc1d0de78f41962dd0ea7ebe627255bbd785820ae04354849629ca3823d"} Feb 01 14:42:09 crc kubenswrapper[4820]: I0201 14:42:09.271618 4820 scope.go:117] "RemoveContainer" containerID="b4385fc10a6e0340954ded696f86c91f43eab93b1bfe9f54aa6aa473118c3ba8" Feb 01 14:42:09 crc kubenswrapper[4820]: I0201 14:42:09.274036 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" event={"ID":"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca","Type":"ContainerStarted","Data":"61c277d84ce6185675862194cd952b584d681456607e9ff2e9abe7277a9fb22f"} Feb 01 14:42:09 crc kubenswrapper[4820]: I0201 14:42:09.367292 4820 scope.go:117] "RemoveContainer" containerID="7d9a4c08cff1a1a6412656a553f0510392cc0ed1df81f77cb725927ee90c5a98" Feb 01 14:42:09 crc kubenswrapper[4820]: I0201 14:42:09.381842 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-w7xrd"] Feb 01 14:42:09 crc kubenswrapper[4820]: I0201 14:42:09.384180 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-w7xrd"] Feb 01 14:42:09 crc kubenswrapper[4820]: I0201 14:42:09.418711 4820 scope.go:117] "RemoveContainer" containerID="b4385fc10a6e0340954ded696f86c91f43eab93b1bfe9f54aa6aa473118c3ba8" Feb 01 14:42:09 crc kubenswrapper[4820]: E0201 14:42:09.419189 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4385fc10a6e0340954ded696f86c91f43eab93b1bfe9f54aa6aa473118c3ba8\": container with ID starting with b4385fc10a6e0340954ded696f86c91f43eab93b1bfe9f54aa6aa473118c3ba8 not found: ID does not exist" containerID="b4385fc10a6e0340954ded696f86c91f43eab93b1bfe9f54aa6aa473118c3ba8" Feb 01 14:42:09 crc kubenswrapper[4820]: I0201 14:42:09.419228 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4385fc10a6e0340954ded696f86c91f43eab93b1bfe9f54aa6aa473118c3ba8"} err="failed to get container status \"b4385fc10a6e0340954ded696f86c91f43eab93b1bfe9f54aa6aa473118c3ba8\": rpc error: code = NotFound desc = could not find container 
\"b4385fc10a6e0340954ded696f86c91f43eab93b1bfe9f54aa6aa473118c3ba8\": container with ID starting with b4385fc10a6e0340954ded696f86c91f43eab93b1bfe9f54aa6aa473118c3ba8 not found: ID does not exist" Feb 01 14:42:09 crc kubenswrapper[4820]: I0201 14:42:09.419255 4820 scope.go:117] "RemoveContainer" containerID="7d9a4c08cff1a1a6412656a553f0510392cc0ed1df81f77cb725927ee90c5a98" Feb 01 14:42:09 crc kubenswrapper[4820]: E0201 14:42:09.419810 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d9a4c08cff1a1a6412656a553f0510392cc0ed1df81f77cb725927ee90c5a98\": container with ID starting with 7d9a4c08cff1a1a6412656a553f0510392cc0ed1df81f77cb725927ee90c5a98 not found: ID does not exist" containerID="7d9a4c08cff1a1a6412656a553f0510392cc0ed1df81f77cb725927ee90c5a98" Feb 01 14:42:09 crc kubenswrapper[4820]: I0201 14:42:09.419850 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d9a4c08cff1a1a6412656a553f0510392cc0ed1df81f77cb725927ee90c5a98"} err="failed to get container status \"7d9a4c08cff1a1a6412656a553f0510392cc0ed1df81f77cb725927ee90c5a98\": rpc error: code = NotFound desc = could not find container \"7d9a4c08cff1a1a6412656a553f0510392cc0ed1df81f77cb725927ee90c5a98\": container with ID starting with 7d9a4c08cff1a1a6412656a553f0510392cc0ed1df81f77cb725927ee90c5a98 not found: ID does not exist" Feb 01 14:42:10 crc kubenswrapper[4820]: I0201 14:42:10.287683 4820 generic.go:334] "Generic (PLEG): container finished" podID="4122afb2-67c9-4360-b5d0-72ab7b8bc7ca" containerID="d480f3d9776484a4e4da498d3269efb7a9f4057630e34cbdf3fdf563a745444d" exitCode=0 Feb 01 14:42:10 crc kubenswrapper[4820]: I0201 14:42:10.287757 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" event={"ID":"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca","Type":"ContainerDied","Data":"d480f3d9776484a4e4da498d3269efb7a9f4057630e34cbdf3fdf563a745444d"} Feb 01 14:42:11 crc kubenswrapper[4820]: I0201 14:42:11.220244 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="558aab28-1ba2-46cc-9504-405fc50f326f" path="/var/lib/kubelet/pods/558aab28-1ba2-46cc-9504-405fc50f326f/volumes" Feb 01 14:42:11 crc kubenswrapper[4820]: I0201 14:42:11.300114 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" event={"ID":"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca","Type":"ContainerStarted","Data":"bb61a2a412cc6566a6a86bd1d242c071235ea2a86c5269179e1ca27cd9176182"} Feb 01 14:42:11 crc kubenswrapper[4820]: I0201 14:42:11.300743 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" Feb 01 14:42:11 crc kubenswrapper[4820]: I0201 14:42:11.339612 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" podStartSLOduration=3.339578249 podStartE2EDuration="3.339578249s" podCreationTimestamp="2026-02-01 14:42:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:42:11.320089471 +0000 UTC m=+1272.840455785" watchObservedRunningTime="2026-02-01 14:42:11.339578249 +0000 UTC m=+1272.859944573" Feb 01 14:42:13 crc kubenswrapper[4820]: I0201 14:42:13.566126 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68d4b6d797-w7xrd" podUID="558aab28-1ba2-46cc-9504-405fc50f326f" containerName="dnsmasq-dns" 
probeResult="failure" output="dial tcp 10.217.0.182:5353: i/o timeout" Feb 01 14:42:18 crc kubenswrapper[4820]: I0201 14:42:18.583112 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" Feb 01 14:42:18 crc kubenswrapper[4820]: I0201 14:42:18.663276 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-qtk24"] Feb 01 14:42:18 crc kubenswrapper[4820]: I0201 14:42:18.663991 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-578b8d767c-qtk24" podUID="c611b5b9-38b8-42a6-bbb4-661f53f74a52" containerName="dnsmasq-dns" containerID="cri-o://8d8ae610646e59a43f0c5537d0ca5d726afc8b698c9acbc55909177146168052" gracePeriod=10 Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.116739 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-qtk24" Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.242540 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.242642 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.270087 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-dns-svc\") pod \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\" (UID: \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\") " Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.270169 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-config\") pod \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\" (UID: \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\") " Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.270192 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-ovsdbserver-sb\") pod \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\" (UID: \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\") " Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.270235 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-ovsdbserver-nb\") pod \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\" (UID: \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\") " Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.270294 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-openstack-edpm-ipam\") pod \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\" (UID: \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\") " Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.270379 4820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgthq\" (UniqueName: \"kubernetes.io/projected/c611b5b9-38b8-42a6-bbb4-661f53f74a52-kube-api-access-qgthq\") pod \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\" (UID: \"c611b5b9-38b8-42a6-bbb4-661f53f74a52\") " Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.276379 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c611b5b9-38b8-42a6-bbb4-661f53f74a52-kube-api-access-qgthq" (OuterVolumeSpecName: "kube-api-access-qgthq") pod "c611b5b9-38b8-42a6-bbb4-661f53f74a52" (UID: "c611b5b9-38b8-42a6-bbb4-661f53f74a52"). InnerVolumeSpecName "kube-api-access-qgthq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.319491 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "c611b5b9-38b8-42a6-bbb4-661f53f74a52" (UID: "c611b5b9-38b8-42a6-bbb4-661f53f74a52"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.322828 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c611b5b9-38b8-42a6-bbb4-661f53f74a52" (UID: "c611b5b9-38b8-42a6-bbb4-661f53f74a52"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.325509 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-config" (OuterVolumeSpecName: "config") pod "c611b5b9-38b8-42a6-bbb4-661f53f74a52" (UID: "c611b5b9-38b8-42a6-bbb4-661f53f74a52"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.326319 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c611b5b9-38b8-42a6-bbb4-661f53f74a52" (UID: "c611b5b9-38b8-42a6-bbb4-661f53f74a52"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.340987 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c611b5b9-38b8-42a6-bbb4-661f53f74a52" (UID: "c611b5b9-38b8-42a6-bbb4-661f53f74a52"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.372155 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.372202 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.372214 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgthq\" (UniqueName: \"kubernetes.io/projected/c611b5b9-38b8-42a6-bbb4-661f53f74a52-kube-api-access-qgthq\") on node \"crc\" DevicePath \"\"" Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.372226 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.372238 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-config\") on node \"crc\" DevicePath \"\"" Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.372257 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c611b5b9-38b8-42a6-bbb4-661f53f74a52-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.372637 4820 generic.go:334] "Generic (PLEG): container finished" podID="c611b5b9-38b8-42a6-bbb4-661f53f74a52" containerID="8d8ae610646e59a43f0c5537d0ca5d726afc8b698c9acbc55909177146168052" exitCode=0 Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.372672 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-qtk24" event={"ID":"c611b5b9-38b8-42a6-bbb4-661f53f74a52","Type":"ContainerDied","Data":"8d8ae610646e59a43f0c5537d0ca5d726afc8b698c9acbc55909177146168052"} Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.372689 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-qtk24" Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.372708 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-qtk24" event={"ID":"c611b5b9-38b8-42a6-bbb4-661f53f74a52","Type":"ContainerDied","Data":"b9292af4872a27937995cd3f9eb8991b147a8d0dc4d89363a8d675642dba2a55"} Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.372739 4820 scope.go:117] "RemoveContainer" containerID="8d8ae610646e59a43f0c5537d0ca5d726afc8b698c9acbc55909177146168052" Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.409309 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-qtk24"] Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.418434 4820 scope.go:117] "RemoveContainer" containerID="9853fa08f04313467ea123fd4a578a3a3a910ce41dacaa5d075b466f63ef3f70" Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.420663 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-qtk24"] Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.444013 4820 scope.go:117] "RemoveContainer" containerID="8d8ae610646e59a43f0c5537d0ca5d726afc8b698c9acbc55909177146168052" Feb 01 14:42:19 crc kubenswrapper[4820]: E0201 14:42:19.444529 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d8ae610646e59a43f0c5537d0ca5d726afc8b698c9acbc55909177146168052\": container with ID starting with 8d8ae610646e59a43f0c5537d0ca5d726afc8b698c9acbc55909177146168052 not found: ID does not exist" containerID="8d8ae610646e59a43f0c5537d0ca5d726afc8b698c9acbc55909177146168052" Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.444634 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d8ae610646e59a43f0c5537d0ca5d726afc8b698c9acbc55909177146168052"} err="failed to get container status \"8d8ae610646e59a43f0c5537d0ca5d726afc8b698c9acbc55909177146168052\": rpc error: code = NotFound desc = could not find container \"8d8ae610646e59a43f0c5537d0ca5d726afc8b698c9acbc55909177146168052\": container with ID starting with 8d8ae610646e59a43f0c5537d0ca5d726afc8b698c9acbc55909177146168052 not found: ID does not exist" Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.444740 4820 scope.go:117] "RemoveContainer" containerID="9853fa08f04313467ea123fd4a578a3a3a910ce41dacaa5d075b466f63ef3f70" Feb 01 14:42:19 crc kubenswrapper[4820]: E0201 14:42:19.445242 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9853fa08f04313467ea123fd4a578a3a3a910ce41dacaa5d075b466f63ef3f70\": container with ID starting with 9853fa08f04313467ea123fd4a578a3a3a910ce41dacaa5d075b466f63ef3f70 not found: ID does not exist" containerID="9853fa08f04313467ea123fd4a578a3a3a910ce41dacaa5d075b466f63ef3f70" Feb 01 14:42:19 crc kubenswrapper[4820]: I0201 14:42:19.445454 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9853fa08f04313467ea123fd4a578a3a3a910ce41dacaa5d075b466f63ef3f70"} err="failed to get container status \"9853fa08f04313467ea123fd4a578a3a3a910ce41dacaa5d075b466f63ef3f70\": rpc error: code = NotFound desc = could not find container \"9853fa08f04313467ea123fd4a578a3a3a910ce41dacaa5d075b466f63ef3f70\": container with ID starting with 9853fa08f04313467ea123fd4a578a3a3a910ce41dacaa5d075b466f63ef3f70 not found: ID does not exist" Feb 01 
14:42:21 crc kubenswrapper[4820]: I0201 14:42:21.209014 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c611b5b9-38b8-42a6-bbb4-661f53f74a52" path="/var/lib/kubelet/pods/c611b5b9-38b8-42a6-bbb4-661f53f74a52/volumes" Feb 01 14:42:28 crc kubenswrapper[4820]: I0201 14:42:28.864504 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb"] Feb 01 14:42:28 crc kubenswrapper[4820]: E0201 14:42:28.867963 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="558aab28-1ba2-46cc-9504-405fc50f326f" containerName="dnsmasq-dns" Feb 01 14:42:28 crc kubenswrapper[4820]: I0201 14:42:28.867992 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="558aab28-1ba2-46cc-9504-405fc50f326f" containerName="dnsmasq-dns" Feb 01 14:42:28 crc kubenswrapper[4820]: E0201 14:42:28.868029 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c611b5b9-38b8-42a6-bbb4-661f53f74a52" containerName="init" Feb 01 14:42:28 crc kubenswrapper[4820]: I0201 14:42:28.868036 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c611b5b9-38b8-42a6-bbb4-661f53f74a52" containerName="init" Feb 01 14:42:28 crc kubenswrapper[4820]: E0201 14:42:28.868045 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c611b5b9-38b8-42a6-bbb4-661f53f74a52" containerName="dnsmasq-dns" Feb 01 14:42:28 crc kubenswrapper[4820]: I0201 14:42:28.868051 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c611b5b9-38b8-42a6-bbb4-661f53f74a52" containerName="dnsmasq-dns" Feb 01 14:42:28 crc kubenswrapper[4820]: E0201 14:42:28.868064 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="558aab28-1ba2-46cc-9504-405fc50f326f" containerName="init" Feb 01 14:42:28 crc kubenswrapper[4820]: I0201 14:42:28.868069 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="558aab28-1ba2-46cc-9504-405fc50f326f" containerName="init" Feb 01 14:42:28 crc kubenswrapper[4820]: I0201 14:42:28.868245 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="558aab28-1ba2-46cc-9504-405fc50f326f" containerName="dnsmasq-dns" Feb 01 14:42:28 crc kubenswrapper[4820]: I0201 14:42:28.868266 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c611b5b9-38b8-42a6-bbb4-661f53f74a52" containerName="dnsmasq-dns" Feb 01 14:42:28 crc kubenswrapper[4820]: I0201 14:42:28.869082 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb" Feb 01 14:42:28 crc kubenswrapper[4820]: I0201 14:42:28.873178 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 01 14:42:28 crc kubenswrapper[4820]: I0201 14:42:28.873380 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw" Feb 01 14:42:28 crc kubenswrapper[4820]: I0201 14:42:28.873653 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 01 14:42:28 crc kubenswrapper[4820]: I0201 14:42:28.873924 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 01 14:42:28 crc kubenswrapper[4820]: I0201 14:42:28.882239 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb"] Feb 01 14:42:28 crc kubenswrapper[4820]: I0201 14:42:28.947647 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztqdv\" (UniqueName: \"kubernetes.io/projected/583cc935-a059-4d89-8113-5f03c1ad96ca-kube-api-access-ztqdv\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb\" (UID: \"583cc935-a059-4d89-8113-5f03c1ad96ca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb" Feb 01 14:42:28 crc kubenswrapper[4820]: I0201 14:42:28.949861 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/583cc935-a059-4d89-8113-5f03c1ad96ca-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb\" (UID: \"583cc935-a059-4d89-8113-5f03c1ad96ca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb" Feb 01 14:42:28 crc kubenswrapper[4820]: I0201 14:42:28.949997 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/583cc935-a059-4d89-8113-5f03c1ad96ca-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb\" (UID: \"583cc935-a059-4d89-8113-5f03c1ad96ca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb" Feb 01 14:42:28 crc kubenswrapper[4820]: I0201 14:42:28.950121 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/583cc935-a059-4d89-8113-5f03c1ad96ca-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb\" (UID: \"583cc935-a059-4d89-8113-5f03c1ad96ca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb" Feb 01 14:42:29 crc kubenswrapper[4820]: I0201 14:42:29.051811 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztqdv\" (UniqueName: \"kubernetes.io/projected/583cc935-a059-4d89-8113-5f03c1ad96ca-kube-api-access-ztqdv\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb\" (UID: \"583cc935-a059-4d89-8113-5f03c1ad96ca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb" Feb 01 14:42:29 crc kubenswrapper[4820]: I0201 14:42:29.051923 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/583cc935-a059-4d89-8113-5f03c1ad96ca-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb\" (UID: \"583cc935-a059-4d89-8113-5f03c1ad96ca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb" Feb 01 14:42:29 crc kubenswrapper[4820]: I0201 14:42:29.051968 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/583cc935-a059-4d89-8113-5f03c1ad96ca-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb\" (UID: \"583cc935-a059-4d89-8113-5f03c1ad96ca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb" Feb 01 14:42:29 crc kubenswrapper[4820]: I0201 14:42:29.052052 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/583cc935-a059-4d89-8113-5f03c1ad96ca-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb\" (UID: \"583cc935-a059-4d89-8113-5f03c1ad96ca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb" Feb 01 14:42:29 crc kubenswrapper[4820]: I0201 14:42:29.058713 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/583cc935-a059-4d89-8113-5f03c1ad96ca-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb\" (UID: \"583cc935-a059-4d89-8113-5f03c1ad96ca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb" Feb 01 14:42:29 crc kubenswrapper[4820]: I0201 14:42:29.058729 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/583cc935-a059-4d89-8113-5f03c1ad96ca-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb\" (UID: \"583cc935-a059-4d89-8113-5f03c1ad96ca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb" Feb 01 14:42:29 crc kubenswrapper[4820]: I0201 14:42:29.059610 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/583cc935-a059-4d89-8113-5f03c1ad96ca-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb\" (UID: \"583cc935-a059-4d89-8113-5f03c1ad96ca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb" Feb 01 14:42:29 crc kubenswrapper[4820]: I0201 14:42:29.071242 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztqdv\" (UniqueName: \"kubernetes.io/projected/583cc935-a059-4d89-8113-5f03c1ad96ca-kube-api-access-ztqdv\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb\" (UID: \"583cc935-a059-4d89-8113-5f03c1ad96ca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb" Feb 01 14:42:29 crc kubenswrapper[4820]: I0201 14:42:29.194316 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb" Feb 01 14:42:29 crc kubenswrapper[4820]: I0201 14:42:29.708431 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb"] Feb 01 14:42:29 crc kubenswrapper[4820]: I0201 14:42:29.722280 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 01 14:42:30 crc kubenswrapper[4820]: I0201 14:42:30.483673 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb" event={"ID":"583cc935-a059-4d89-8113-5f03c1ad96ca","Type":"ContainerStarted","Data":"ec09047a217ff63697b361419d579e4eb0d31c4158c24015ddd2ec288d4f1bb9"} Feb 01 14:42:30 crc kubenswrapper[4820]: I0201 14:42:30.485142 4820 generic.go:334] "Generic (PLEG): container finished" podID="68ed3721-fba2-41c2-bb4a-ee20df021175" containerID="de41cbf5a30e9227240a8b36d95671c0d76918a0d25528013e5c85fef26717e9" exitCode=0 Feb 01 14:42:30 crc kubenswrapper[4820]: I0201 14:42:30.485198 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"68ed3721-fba2-41c2-bb4a-ee20df021175","Type":"ContainerDied","Data":"de41cbf5a30e9227240a8b36d95671c0d76918a0d25528013e5c85fef26717e9"} Feb 01 14:42:30 crc kubenswrapper[4820]: I0201 14:42:30.487353 4820 generic.go:334] "Generic (PLEG): container finished" podID="ec613d0e-38d1-4ba1-950c-130a412ace9b" containerID="45dc0f5bb3d92f5e5ac09a23482951cb89abc03c8c1f3ab0b392986c58419de1" exitCode=0 Feb 01 14:42:30 crc kubenswrapper[4820]: I0201 14:42:30.487393 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ec613d0e-38d1-4ba1-950c-130a412ace9b","Type":"ContainerDied","Data":"45dc0f5bb3d92f5e5ac09a23482951cb89abc03c8c1f3ab0b392986c58419de1"} Feb 01 14:42:31 crc kubenswrapper[4820]: I0201 14:42:31.499960 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"68ed3721-fba2-41c2-bb4a-ee20df021175","Type":"ContainerStarted","Data":"a207b8e9e6d50e0482262cf65580a83908869df4c985404ae06da00f3940b8ff"} Feb 01 14:42:31 crc kubenswrapper[4820]: I0201 14:42:31.500848 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 01 14:42:31 crc kubenswrapper[4820]: I0201 14:42:31.503148 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ec613d0e-38d1-4ba1-950c-130a412ace9b","Type":"ContainerStarted","Data":"247439a8c7581414665f4d30d389713483cf0c5f5bd56e686c587a4ceaf80130"} Feb 01 14:42:31 crc kubenswrapper[4820]: I0201 14:42:31.503562 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:42:31 crc kubenswrapper[4820]: I0201 14:42:31.553752 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.553733225 podStartE2EDuration="36.553733225s" podCreationTimestamp="2026-02-01 14:41:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:42:31.551388757 +0000 UTC m=+1293.071755041" watchObservedRunningTime="2026-02-01 14:42:31.553733225 +0000 UTC m=+1293.074099509" Feb 01 14:42:31 crc kubenswrapper[4820]: I0201 14:42:31.557977 4820 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.557966679 podStartE2EDuration="36.557966679s" podCreationTimestamp="2026-02-01 14:41:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 14:42:31.529193814 +0000 UTC m=+1293.049560098" watchObservedRunningTime="2026-02-01 14:42:31.557966679 +0000 UTC m=+1293.078332963" Feb 01 14:42:39 crc kubenswrapper[4820]: I0201 14:42:39.576709 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb" event={"ID":"583cc935-a059-4d89-8113-5f03c1ad96ca","Type":"ContainerStarted","Data":"bbdad66549df31e3945ef80a25cb22592210e3e4216b5c7f892a1a4fece6dec3"} Feb 01 14:42:39 crc kubenswrapper[4820]: I0201 14:42:39.599214 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb" podStartSLOduration=2.952197967 podStartE2EDuration="11.599193301s" podCreationTimestamp="2026-02-01 14:42:28 +0000 UTC" firstStartedPulling="2026-02-01 14:42:29.722053658 +0000 UTC m=+1291.242419942" lastFinishedPulling="2026-02-01 14:42:38.369048992 +0000 UTC m=+1299.889415276" observedRunningTime="2026-02-01 14:42:39.592687593 +0000 UTC m=+1301.113053897" watchObservedRunningTime="2026-02-01 14:42:39.599193301 +0000 UTC m=+1301.119559585" Feb 01 14:42:45 crc kubenswrapper[4820]: I0201 14:42:45.651107 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 01 14:42:45 crc kubenswrapper[4820]: I0201 14:42:45.788138 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 01 14:42:49 crc kubenswrapper[4820]: I0201 14:42:49.242166 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:42:49 crc kubenswrapper[4820]: I0201 14:42:49.242796 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:42:49 crc kubenswrapper[4820]: I0201 14:42:49.656006 4820 generic.go:334] "Generic (PLEG): container finished" podID="583cc935-a059-4d89-8113-5f03c1ad96ca" containerID="bbdad66549df31e3945ef80a25cb22592210e3e4216b5c7f892a1a4fece6dec3" exitCode=0 Feb 01 14:42:49 crc kubenswrapper[4820]: I0201 14:42:49.656054 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb" event={"ID":"583cc935-a059-4d89-8113-5f03c1ad96ca","Type":"ContainerDied","Data":"bbdad66549df31e3945ef80a25cb22592210e3e4216b5c7f892a1a4fece6dec3"} Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.180236 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.265539 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztqdv\" (UniqueName: \"kubernetes.io/projected/583cc935-a059-4d89-8113-5f03c1ad96ca-kube-api-access-ztqdv\") pod \"583cc935-a059-4d89-8113-5f03c1ad96ca\" (UID: \"583cc935-a059-4d89-8113-5f03c1ad96ca\") " Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.265640 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/583cc935-a059-4d89-8113-5f03c1ad96ca-repo-setup-combined-ca-bundle\") pod \"583cc935-a059-4d89-8113-5f03c1ad96ca\" (UID: \"583cc935-a059-4d89-8113-5f03c1ad96ca\") " Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.265716 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/583cc935-a059-4d89-8113-5f03c1ad96ca-ssh-key-openstack-edpm-ipam\") pod \"583cc935-a059-4d89-8113-5f03c1ad96ca\" (UID: \"583cc935-a059-4d89-8113-5f03c1ad96ca\") " Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.265803 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/583cc935-a059-4d89-8113-5f03c1ad96ca-inventory\") pod \"583cc935-a059-4d89-8113-5f03c1ad96ca\" (UID: \"583cc935-a059-4d89-8113-5f03c1ad96ca\") " Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.278562 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/583cc935-a059-4d89-8113-5f03c1ad96ca-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "583cc935-a059-4d89-8113-5f03c1ad96ca" (UID: "583cc935-a059-4d89-8113-5f03c1ad96ca"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.283145 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/583cc935-a059-4d89-8113-5f03c1ad96ca-kube-api-access-ztqdv" (OuterVolumeSpecName: "kube-api-access-ztqdv") pod "583cc935-a059-4d89-8113-5f03c1ad96ca" (UID: "583cc935-a059-4d89-8113-5f03c1ad96ca"). InnerVolumeSpecName "kube-api-access-ztqdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.292447 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/583cc935-a059-4d89-8113-5f03c1ad96ca-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "583cc935-a059-4d89-8113-5f03c1ad96ca" (UID: "583cc935-a059-4d89-8113-5f03c1ad96ca"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.293844 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/583cc935-a059-4d89-8113-5f03c1ad96ca-inventory" (OuterVolumeSpecName: "inventory") pod "583cc935-a059-4d89-8113-5f03c1ad96ca" (UID: "583cc935-a059-4d89-8113-5f03c1ad96ca"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.367856 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/583cc935-a059-4d89-8113-5f03c1ad96ca-inventory\") on node \"crc\" DevicePath \"\"" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.368258 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztqdv\" (UniqueName: \"kubernetes.io/projected/583cc935-a059-4d89-8113-5f03c1ad96ca-kube-api-access-ztqdv\") on node \"crc\" DevicePath \"\"" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.368357 4820 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/583cc935-a059-4d89-8113-5f03c1ad96ca-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.368441 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/583cc935-a059-4d89-8113-5f03c1ad96ca-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.677821 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb" event={"ID":"583cc935-a059-4d89-8113-5f03c1ad96ca","Type":"ContainerDied","Data":"ec09047a217ff63697b361419d579e4eb0d31c4158c24015ddd2ec288d4f1bb9"} Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.677860 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec09047a217ff63697b361419d579e4eb0d31c4158c24015ddd2ec288d4f1bb9" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.678366 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.784276 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb"] Feb 01 14:42:51 crc kubenswrapper[4820]: E0201 14:42:51.784615 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="583cc935-a059-4d89-8113-5f03c1ad96ca" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.784633 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="583cc935-a059-4d89-8113-5f03c1ad96ca" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.784813 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="583cc935-a059-4d89-8113-5f03c1ad96ca" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.785460 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.787401 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.789439 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.790642 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.795132 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.796752 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb"] Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.879408 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10758666-3fd3-4e5b-9afc-56c22f714fba-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb\" (UID: \"10758666-3fd3-4e5b-9afc-56c22f714fba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.879498 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10758666-3fd3-4e5b-9afc-56c22f714fba-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb\" (UID: \"10758666-3fd3-4e5b-9afc-56c22f714fba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.879564 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvm8z\" (UniqueName: \"kubernetes.io/projected/10758666-3fd3-4e5b-9afc-56c22f714fba-kube-api-access-pvm8z\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb\" (UID: \"10758666-3fd3-4e5b-9afc-56c22f714fba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.879722 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10758666-3fd3-4e5b-9afc-56c22f714fba-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb\" (UID: \"10758666-3fd3-4e5b-9afc-56c22f714fba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.981609 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10758666-3fd3-4e5b-9afc-56c22f714fba-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb\" (UID: \"10758666-3fd3-4e5b-9afc-56c22f714fba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.981690 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/10758666-3fd3-4e5b-9afc-56c22f714fba-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb\" (UID: \"10758666-3fd3-4e5b-9afc-56c22f714fba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.981717 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvm8z\" (UniqueName: \"kubernetes.io/projected/10758666-3fd3-4e5b-9afc-56c22f714fba-kube-api-access-pvm8z\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb\" (UID: \"10758666-3fd3-4e5b-9afc-56c22f714fba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.981769 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10758666-3fd3-4e5b-9afc-56c22f714fba-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb\" (UID: \"10758666-3fd3-4e5b-9afc-56c22f714fba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.985538 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10758666-3fd3-4e5b-9afc-56c22f714fba-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb\" (UID: \"10758666-3fd3-4e5b-9afc-56c22f714fba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.985549 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10758666-3fd3-4e5b-9afc-56c22f714fba-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb\" (UID: \"10758666-3fd3-4e5b-9afc-56c22f714fba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.994715 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10758666-3fd3-4e5b-9afc-56c22f714fba-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb\" (UID: \"10758666-3fd3-4e5b-9afc-56c22f714fba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb" Feb 01 14:42:51 crc kubenswrapper[4820]: I0201 14:42:51.996362 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvm8z\" (UniqueName: \"kubernetes.io/projected/10758666-3fd3-4e5b-9afc-56c22f714fba-kube-api-access-pvm8z\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb\" (UID: \"10758666-3fd3-4e5b-9afc-56c22f714fba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb" Feb 01 14:42:52 crc kubenswrapper[4820]: I0201 14:42:52.116239 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb" Feb 01 14:42:52 crc kubenswrapper[4820]: I0201 14:42:52.653790 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb"] Feb 01 14:42:52 crc kubenswrapper[4820]: I0201 14:42:52.687167 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb" event={"ID":"10758666-3fd3-4e5b-9afc-56c22f714fba","Type":"ContainerStarted","Data":"f69d453f28ee6ce87cd23e9495f67ed3cb27c19bccd0e5fb72f10875c55aa721"} Feb 01 14:42:53 crc kubenswrapper[4820]: I0201 14:42:53.699041 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb" event={"ID":"10758666-3fd3-4e5b-9afc-56c22f714fba","Type":"ContainerStarted","Data":"5e39b15372a7cc7be94ed362c777fb49817d99d10390cbc0a10eb2720cf92df6"} Feb 01 14:42:53 crc kubenswrapper[4820]: I0201 14:42:53.727523 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb" podStartSLOduration=2.330789758 podStartE2EDuration="2.727495322s" podCreationTimestamp="2026-02-01 14:42:51 +0000 UTC" firstStartedPulling="2026-02-01 14:42:52.656974821 +0000 UTC m=+1314.177341105" lastFinishedPulling="2026-02-01 14:42:53.053680385 +0000 UTC m=+1314.574046669" observedRunningTime="2026-02-01 14:42:53.714095424 +0000 UTC m=+1315.234461708" watchObservedRunningTime="2026-02-01 14:42:53.727495322 +0000 UTC m=+1315.247861646" Feb 01 14:43:05 crc kubenswrapper[4820]: I0201 14:43:05.025809 4820 scope.go:117] "RemoveContainer" containerID="f2f787e2f9ab3ca64923d1e68673f434c28a6638f67baa009603248e9682ef60" Feb 01 14:43:19 crc kubenswrapper[4820]: I0201 14:43:19.242328 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:43:19 crc kubenswrapper[4820]: I0201 14:43:19.242930 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:43:19 crc kubenswrapper[4820]: I0201 14:43:19.242979 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 14:43:19 crc kubenswrapper[4820]: I0201 14:43:19.243711 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2741853f96eda4464b2d41189acc694c25bdbd3ba9a0c898eedf42557ed1eae0"} pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 14:43:19 crc kubenswrapper[4820]: I0201 14:43:19.243778 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" containerID="cri-o://2741853f96eda4464b2d41189acc694c25bdbd3ba9a0c898eedf42557ed1eae0" 
gracePeriod=600 Feb 01 14:43:19 crc kubenswrapper[4820]: I0201 14:43:19.933312 4820 generic.go:334] "Generic (PLEG): container finished" podID="060a9e0b-803f-4ccc-bed6-92614d449527" containerID="2741853f96eda4464b2d41189acc694c25bdbd3ba9a0c898eedf42557ed1eae0" exitCode=0 Feb 01 14:43:19 crc kubenswrapper[4820]: I0201 14:43:19.933358 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerDied","Data":"2741853f96eda4464b2d41189acc694c25bdbd3ba9a0c898eedf42557ed1eae0"} Feb 01 14:43:19 crc kubenswrapper[4820]: I0201 14:43:19.933627 4820 scope.go:117] "RemoveContainer" containerID="18339407f9a7bd299b3086430fb392c91bda2f44145a4ddfb153b9aef0bd2fe6" Feb 01 14:43:20 crc kubenswrapper[4820]: I0201 14:43:20.944430 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerStarted","Data":"3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b"} Feb 01 14:44:05 crc kubenswrapper[4820]: I0201 14:44:05.109555 4820 scope.go:117] "RemoveContainer" containerID="3c26365a04db790a3339f11b707fd2e73ab790b3e5c3fbf036e66a3cdd763675" Feb 01 14:44:05 crc kubenswrapper[4820]: I0201 14:44:05.152689 4820 scope.go:117] "RemoveContainer" containerID="eb41abd1ad58d87c9f4b0faf91fd472dc995b0d23fb18cfb12e25e74d82b128d" Feb 01 14:44:05 crc kubenswrapper[4820]: I0201 14:44:05.192722 4820 scope.go:117] "RemoveContainer" containerID="62a7b3084fc031919202118c8db803cf0f9ff03d330186cce16c1d6cbc815a9e" Feb 01 14:44:05 crc kubenswrapper[4820]: I0201 14:44:05.288900 4820 scope.go:117] "RemoveContainer" containerID="c308d66f737b30b0c8bf257ee150954ad305db019336d8cf1ce28a6e8903a3f6" Feb 01 14:44:05 crc kubenswrapper[4820]: I0201 14:44:05.322647 4820 scope.go:117] "RemoveContainer" containerID="b14cae70f0a2768f84f31bfce59ec59867d513125323ca28f2249db615b54355" Feb 01 14:45:00 crc kubenswrapper[4820]: I0201 14:45:00.153211 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499285-f4lrx"] Feb 01 14:45:00 crc kubenswrapper[4820]: I0201 14:45:00.155595 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499285-f4lrx" Feb 01 14:45:00 crc kubenswrapper[4820]: I0201 14:45:00.165023 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 01 14:45:00 crc kubenswrapper[4820]: I0201 14:45:00.165075 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 01 14:45:00 crc kubenswrapper[4820]: I0201 14:45:00.168763 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499285-f4lrx"] Feb 01 14:45:00 crc kubenswrapper[4820]: I0201 14:45:00.181277 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b6821db-9b76-463b-b4ef-cbb8315b3666-config-volume\") pod \"collect-profiles-29499285-f4lrx\" (UID: \"4b6821db-9b76-463b-b4ef-cbb8315b3666\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499285-f4lrx" Feb 01 14:45:00 crc kubenswrapper[4820]: I0201 14:45:00.181399 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqvs2\" (UniqueName: \"kubernetes.io/projected/4b6821db-9b76-463b-b4ef-cbb8315b3666-kube-api-access-qqvs2\") pod \"collect-profiles-29499285-f4lrx\" (UID: \"4b6821db-9b76-463b-b4ef-cbb8315b3666\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499285-f4lrx" Feb 01 14:45:00 crc kubenswrapper[4820]: I0201 14:45:00.181449 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b6821db-9b76-463b-b4ef-cbb8315b3666-secret-volume\") pod \"collect-profiles-29499285-f4lrx\" (UID: \"4b6821db-9b76-463b-b4ef-cbb8315b3666\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499285-f4lrx" Feb 01 14:45:00 crc kubenswrapper[4820]: I0201 14:45:00.283452 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b6821db-9b76-463b-b4ef-cbb8315b3666-config-volume\") pod \"collect-profiles-29499285-f4lrx\" (UID: \"4b6821db-9b76-463b-b4ef-cbb8315b3666\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499285-f4lrx" Feb 01 14:45:00 crc kubenswrapper[4820]: I0201 14:45:00.283769 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqvs2\" (UniqueName: \"kubernetes.io/projected/4b6821db-9b76-463b-b4ef-cbb8315b3666-kube-api-access-qqvs2\") pod \"collect-profiles-29499285-f4lrx\" (UID: \"4b6821db-9b76-463b-b4ef-cbb8315b3666\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499285-f4lrx" Feb 01 14:45:00 crc kubenswrapper[4820]: I0201 14:45:00.283923 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b6821db-9b76-463b-b4ef-cbb8315b3666-secret-volume\") pod \"collect-profiles-29499285-f4lrx\" (UID: \"4b6821db-9b76-463b-b4ef-cbb8315b3666\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499285-f4lrx" Feb 01 14:45:00 crc kubenswrapper[4820]: I0201 14:45:00.284402 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b6821db-9b76-463b-b4ef-cbb8315b3666-config-volume\") pod 
\"collect-profiles-29499285-f4lrx\" (UID: \"4b6821db-9b76-463b-b4ef-cbb8315b3666\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499285-f4lrx" Feb 01 14:45:00 crc kubenswrapper[4820]: I0201 14:45:00.289633 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b6821db-9b76-463b-b4ef-cbb8315b3666-secret-volume\") pod \"collect-profiles-29499285-f4lrx\" (UID: \"4b6821db-9b76-463b-b4ef-cbb8315b3666\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499285-f4lrx" Feb 01 14:45:00 crc kubenswrapper[4820]: I0201 14:45:00.299770 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqvs2\" (UniqueName: \"kubernetes.io/projected/4b6821db-9b76-463b-b4ef-cbb8315b3666-kube-api-access-qqvs2\") pod \"collect-profiles-29499285-f4lrx\" (UID: \"4b6821db-9b76-463b-b4ef-cbb8315b3666\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499285-f4lrx" Feb 01 14:45:00 crc kubenswrapper[4820]: I0201 14:45:00.494600 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499285-f4lrx" Feb 01 14:45:00 crc kubenswrapper[4820]: I0201 14:45:00.918397 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499285-f4lrx"] Feb 01 14:45:01 crc kubenswrapper[4820]: I0201 14:45:01.836021 4820 generic.go:334] "Generic (PLEG): container finished" podID="4b6821db-9b76-463b-b4ef-cbb8315b3666" containerID="10c3254dffd9946f68dbe7972ad9af89e93a9b9ef49e71985be377a6ee4a6d18" exitCode=0 Feb 01 14:45:01 crc kubenswrapper[4820]: I0201 14:45:01.836198 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499285-f4lrx" event={"ID":"4b6821db-9b76-463b-b4ef-cbb8315b3666","Type":"ContainerDied","Data":"10c3254dffd9946f68dbe7972ad9af89e93a9b9ef49e71985be377a6ee4a6d18"} Feb 01 14:45:01 crc kubenswrapper[4820]: I0201 14:45:01.836450 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499285-f4lrx" event={"ID":"4b6821db-9b76-463b-b4ef-cbb8315b3666","Type":"ContainerStarted","Data":"a300862e89965c298677ed003af6185e3d027f0a3ccecedeb7bb3a16ca110e57"} Feb 01 14:45:03 crc kubenswrapper[4820]: I0201 14:45:03.135524 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499285-f4lrx" Feb 01 14:45:03 crc kubenswrapper[4820]: I0201 14:45:03.333421 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b6821db-9b76-463b-b4ef-cbb8315b3666-config-volume\") pod \"4b6821db-9b76-463b-b4ef-cbb8315b3666\" (UID: \"4b6821db-9b76-463b-b4ef-cbb8315b3666\") " Feb 01 14:45:03 crc kubenswrapper[4820]: I0201 14:45:03.333480 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqvs2\" (UniqueName: \"kubernetes.io/projected/4b6821db-9b76-463b-b4ef-cbb8315b3666-kube-api-access-qqvs2\") pod \"4b6821db-9b76-463b-b4ef-cbb8315b3666\" (UID: \"4b6821db-9b76-463b-b4ef-cbb8315b3666\") " Feb 01 14:45:03 crc kubenswrapper[4820]: I0201 14:45:03.333520 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b6821db-9b76-463b-b4ef-cbb8315b3666-secret-volume\") pod \"4b6821db-9b76-463b-b4ef-cbb8315b3666\" (UID: \"4b6821db-9b76-463b-b4ef-cbb8315b3666\") " Feb 01 14:45:03 crc kubenswrapper[4820]: I0201 14:45:03.334621 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b6821db-9b76-463b-b4ef-cbb8315b3666-config-volume" (OuterVolumeSpecName: "config-volume") pod "4b6821db-9b76-463b-b4ef-cbb8315b3666" (UID: "4b6821db-9b76-463b-b4ef-cbb8315b3666"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:45:03 crc kubenswrapper[4820]: I0201 14:45:03.338962 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b6821db-9b76-463b-b4ef-cbb8315b3666-kube-api-access-qqvs2" (OuterVolumeSpecName: "kube-api-access-qqvs2") pod "4b6821db-9b76-463b-b4ef-cbb8315b3666" (UID: "4b6821db-9b76-463b-b4ef-cbb8315b3666"). InnerVolumeSpecName "kube-api-access-qqvs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:45:03 crc kubenswrapper[4820]: I0201 14:45:03.338963 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b6821db-9b76-463b-b4ef-cbb8315b3666-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4b6821db-9b76-463b-b4ef-cbb8315b3666" (UID: "4b6821db-9b76-463b-b4ef-cbb8315b3666"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:45:03 crc kubenswrapper[4820]: I0201 14:45:03.436282 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b6821db-9b76-463b-b4ef-cbb8315b3666-config-volume\") on node \"crc\" DevicePath \"\"" Feb 01 14:45:03 crc kubenswrapper[4820]: I0201 14:45:03.436326 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqvs2\" (UniqueName: \"kubernetes.io/projected/4b6821db-9b76-463b-b4ef-cbb8315b3666-kube-api-access-qqvs2\") on node \"crc\" DevicePath \"\"" Feb 01 14:45:03 crc kubenswrapper[4820]: I0201 14:45:03.436337 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b6821db-9b76-463b-b4ef-cbb8315b3666-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 01 14:45:03 crc kubenswrapper[4820]: I0201 14:45:03.852661 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499285-f4lrx" event={"ID":"4b6821db-9b76-463b-b4ef-cbb8315b3666","Type":"ContainerDied","Data":"a300862e89965c298677ed003af6185e3d027f0a3ccecedeb7bb3a16ca110e57"} Feb 01 14:45:03 crc kubenswrapper[4820]: I0201 14:45:03.853251 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a300862e89965c298677ed003af6185e3d027f0a3ccecedeb7bb3a16ca110e57" Feb 01 14:45:03 crc kubenswrapper[4820]: I0201 14:45:03.852726 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499285-f4lrx" Feb 01 14:45:05 crc kubenswrapper[4820]: I0201 14:45:05.449252 4820 scope.go:117] "RemoveContainer" containerID="cab89cf77f7942aab7c3d7b2fcc3067eb40dfd940b9c01e9d394c5d223cce512" Feb 01 14:45:05 crc kubenswrapper[4820]: I0201 14:45:05.471865 4820 scope.go:117] "RemoveContainer" containerID="3140d85919217657478a7a1bb0b5275959e9b9169e07ab1d943a0747b8af12cf" Feb 01 14:45:35 crc kubenswrapper[4820]: I0201 14:45:35.209050 4820 generic.go:334] "Generic (PLEG): container finished" podID="10758666-3fd3-4e5b-9afc-56c22f714fba" containerID="5e39b15372a7cc7be94ed362c777fb49817d99d10390cbc0a10eb2720cf92df6" exitCode=0 Feb 01 14:45:35 crc kubenswrapper[4820]: I0201 14:45:35.214254 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb" event={"ID":"10758666-3fd3-4e5b-9afc-56c22f714fba","Type":"ContainerDied","Data":"5e39b15372a7cc7be94ed362c777fb49817d99d10390cbc0a10eb2720cf92df6"} Feb 01 14:45:36 crc kubenswrapper[4820]: I0201 14:45:36.602801 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb" Feb 01 14:45:36 crc kubenswrapper[4820]: I0201 14:45:36.735039 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10758666-3fd3-4e5b-9afc-56c22f714fba-bootstrap-combined-ca-bundle\") pod \"10758666-3fd3-4e5b-9afc-56c22f714fba\" (UID: \"10758666-3fd3-4e5b-9afc-56c22f714fba\") " Feb 01 14:45:36 crc kubenswrapper[4820]: I0201 14:45:36.735161 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10758666-3fd3-4e5b-9afc-56c22f714fba-inventory\") pod \"10758666-3fd3-4e5b-9afc-56c22f714fba\" (UID: \"10758666-3fd3-4e5b-9afc-56c22f714fba\") " Feb 01 14:45:36 crc kubenswrapper[4820]: I0201 14:45:36.735332 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvm8z\" (UniqueName: \"kubernetes.io/projected/10758666-3fd3-4e5b-9afc-56c22f714fba-kube-api-access-pvm8z\") pod \"10758666-3fd3-4e5b-9afc-56c22f714fba\" (UID: \"10758666-3fd3-4e5b-9afc-56c22f714fba\") " Feb 01 14:45:36 crc kubenswrapper[4820]: I0201 14:45:36.735364 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10758666-3fd3-4e5b-9afc-56c22f714fba-ssh-key-openstack-edpm-ipam\") pod \"10758666-3fd3-4e5b-9afc-56c22f714fba\" (UID: \"10758666-3fd3-4e5b-9afc-56c22f714fba\") " Feb 01 14:45:36 crc kubenswrapper[4820]: I0201 14:45:36.740851 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10758666-3fd3-4e5b-9afc-56c22f714fba-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "10758666-3fd3-4e5b-9afc-56c22f714fba" (UID: "10758666-3fd3-4e5b-9afc-56c22f714fba"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:45:36 crc kubenswrapper[4820]: I0201 14:45:36.741462 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10758666-3fd3-4e5b-9afc-56c22f714fba-kube-api-access-pvm8z" (OuterVolumeSpecName: "kube-api-access-pvm8z") pod "10758666-3fd3-4e5b-9afc-56c22f714fba" (UID: "10758666-3fd3-4e5b-9afc-56c22f714fba"). InnerVolumeSpecName "kube-api-access-pvm8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:45:36 crc kubenswrapper[4820]: I0201 14:45:36.764028 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10758666-3fd3-4e5b-9afc-56c22f714fba-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "10758666-3fd3-4e5b-9afc-56c22f714fba" (UID: "10758666-3fd3-4e5b-9afc-56c22f714fba"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:45:36 crc kubenswrapper[4820]: I0201 14:45:36.776248 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10758666-3fd3-4e5b-9afc-56c22f714fba-inventory" (OuterVolumeSpecName: "inventory") pod "10758666-3fd3-4e5b-9afc-56c22f714fba" (UID: "10758666-3fd3-4e5b-9afc-56c22f714fba"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:45:36 crc kubenswrapper[4820]: I0201 14:45:36.838367 4820 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10758666-3fd3-4e5b-9afc-56c22f714fba-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:45:36 crc kubenswrapper[4820]: I0201 14:45:36.838417 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10758666-3fd3-4e5b-9afc-56c22f714fba-inventory\") on node \"crc\" DevicePath \"\"" Feb 01 14:45:36 crc kubenswrapper[4820]: I0201 14:45:36.838437 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvm8z\" (UniqueName: \"kubernetes.io/projected/10758666-3fd3-4e5b-9afc-56c22f714fba-kube-api-access-pvm8z\") on node \"crc\" DevicePath \"\"" Feb 01 14:45:36 crc kubenswrapper[4820]: I0201 14:45:36.838453 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10758666-3fd3-4e5b-9afc-56c22f714fba-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 14:45:37 crc kubenswrapper[4820]: I0201 14:45:37.229761 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb" event={"ID":"10758666-3fd3-4e5b-9afc-56c22f714fba","Type":"ContainerDied","Data":"f69d453f28ee6ce87cd23e9495f67ed3cb27c19bccd0e5fb72f10875c55aa721"} Feb 01 14:45:37 crc kubenswrapper[4820]: I0201 14:45:37.230053 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f69d453f28ee6ce87cd23e9495f67ed3cb27c19bccd0e5fb72f10875c55aa721" Feb 01 14:45:37 crc kubenswrapper[4820]: I0201 14:45:37.230119 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb" Feb 01 14:45:37 crc kubenswrapper[4820]: I0201 14:45:37.301059 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn"] Feb 01 14:45:37 crc kubenswrapper[4820]: E0201 14:45:37.301732 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10758666-3fd3-4e5b-9afc-56c22f714fba" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 01 14:45:37 crc kubenswrapper[4820]: I0201 14:45:37.301859 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="10758666-3fd3-4e5b-9afc-56c22f714fba" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 01 14:45:37 crc kubenswrapper[4820]: E0201 14:45:37.302008 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b6821db-9b76-463b-b4ef-cbb8315b3666" containerName="collect-profiles" Feb 01 14:45:37 crc kubenswrapper[4820]: I0201 14:45:37.302101 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b6821db-9b76-463b-b4ef-cbb8315b3666" containerName="collect-profiles" Feb 01 14:45:37 crc kubenswrapper[4820]: I0201 14:45:37.302458 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="10758666-3fd3-4e5b-9afc-56c22f714fba" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 01 14:45:37 crc kubenswrapper[4820]: I0201 14:45:37.302569 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b6821db-9b76-463b-b4ef-cbb8315b3666" containerName="collect-profiles" Feb 01 14:45:37 crc kubenswrapper[4820]: I0201 14:45:37.303426 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn" Feb 01 14:45:37 crc kubenswrapper[4820]: I0201 14:45:37.305314 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw" Feb 01 14:45:37 crc kubenswrapper[4820]: I0201 14:45:37.305653 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 01 14:45:37 crc kubenswrapper[4820]: I0201 14:45:37.305824 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 01 14:45:37 crc kubenswrapper[4820]: I0201 14:45:37.308260 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 01 14:45:37 crc kubenswrapper[4820]: I0201 14:45:37.312285 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn"] Feb 01 14:45:37 crc kubenswrapper[4820]: I0201 14:45:37.348691 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmg7m\" (UniqueName: \"kubernetes.io/projected/81fc5a7e-aabf-4edd-9710-1f4322485ab7-kube-api-access-dmg7m\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn\" (UID: \"81fc5a7e-aabf-4edd-9710-1f4322485ab7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn" Feb 01 14:45:37 crc kubenswrapper[4820]: I0201 14:45:37.348777 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/81fc5a7e-aabf-4edd-9710-1f4322485ab7-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn\" (UID: \"81fc5a7e-aabf-4edd-9710-1f4322485ab7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn" Feb 01 14:45:37 crc kubenswrapper[4820]: I0201 14:45:37.348859 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81fc5a7e-aabf-4edd-9710-1f4322485ab7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn\" (UID: \"81fc5a7e-aabf-4edd-9710-1f4322485ab7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn" Feb 01 14:45:37 crc kubenswrapper[4820]: I0201 14:45:37.450517 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmg7m\" (UniqueName: \"kubernetes.io/projected/81fc5a7e-aabf-4edd-9710-1f4322485ab7-kube-api-access-dmg7m\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn\" (UID: \"81fc5a7e-aabf-4edd-9710-1f4322485ab7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn" Feb 01 14:45:37 crc kubenswrapper[4820]: I0201 14:45:37.450604 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/81fc5a7e-aabf-4edd-9710-1f4322485ab7-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn\" (UID: \"81fc5a7e-aabf-4edd-9710-1f4322485ab7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn" Feb 01 14:45:37 crc kubenswrapper[4820]: I0201 14:45:37.450715 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/81fc5a7e-aabf-4edd-9710-1f4322485ab7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn\" (UID: \"81fc5a7e-aabf-4edd-9710-1f4322485ab7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn" Feb 01 14:45:37 crc kubenswrapper[4820]: I0201 14:45:37.453925 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81fc5a7e-aabf-4edd-9710-1f4322485ab7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn\" (UID: \"81fc5a7e-aabf-4edd-9710-1f4322485ab7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn" Feb 01 14:45:37 crc kubenswrapper[4820]: I0201 14:45:37.453966 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/81fc5a7e-aabf-4edd-9710-1f4322485ab7-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn\" (UID: \"81fc5a7e-aabf-4edd-9710-1f4322485ab7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn" Feb 01 14:45:37 crc kubenswrapper[4820]: I0201 14:45:37.466283 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmg7m\" (UniqueName: \"kubernetes.io/projected/81fc5a7e-aabf-4edd-9710-1f4322485ab7-kube-api-access-dmg7m\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn\" (UID: \"81fc5a7e-aabf-4edd-9710-1f4322485ab7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn" Feb 01 14:45:37 crc kubenswrapper[4820]: I0201 14:45:37.618448 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn" Feb 01 14:45:38 crc kubenswrapper[4820]: I0201 14:45:38.195473 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn"] Feb 01 14:45:38 crc kubenswrapper[4820]: I0201 14:45:38.238856 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn" event={"ID":"81fc5a7e-aabf-4edd-9710-1f4322485ab7","Type":"ContainerStarted","Data":"b188a2c6013cbdd2ba7f958272ee6413c8a7e4fe2ae2900c34c28e731e8da611"} Feb 01 14:45:39 crc kubenswrapper[4820]: I0201 14:45:39.252313 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn" event={"ID":"81fc5a7e-aabf-4edd-9710-1f4322485ab7","Type":"ContainerStarted","Data":"21bfb168f68926066b3b94491bedf603dd145ad7e6e621572510bfd6dbadbed7"} Feb 01 14:45:39 crc kubenswrapper[4820]: I0201 14:45:39.268616 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn" podStartSLOduration=1.737071148 podStartE2EDuration="2.26858433s" podCreationTimestamp="2026-02-01 14:45:37 +0000 UTC" firstStartedPulling="2026-02-01 14:45:38.188289664 +0000 UTC m=+1479.708655948" lastFinishedPulling="2026-02-01 14:45:38.719802846 +0000 UTC m=+1480.240169130" observedRunningTime="2026-02-01 14:45:39.266792996 +0000 UTC m=+1480.787159290" watchObservedRunningTime="2026-02-01 14:45:39.26858433 +0000 UTC m=+1480.788950654" Feb 01 14:45:49 crc kubenswrapper[4820]: I0201 14:45:49.242394 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:45:49 crc kubenswrapper[4820]: I0201 14:45:49.242967 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:45:59 crc kubenswrapper[4820]: I0201 14:45:59.761642 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fq6lx"] Feb 01 14:45:59 crc kubenswrapper[4820]: I0201 14:45:59.764689 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fq6lx" Feb 01 14:45:59 crc kubenswrapper[4820]: I0201 14:45:59.772247 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fq6lx"] Feb 01 14:45:59 crc kubenswrapper[4820]: I0201 14:45:59.945291 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xxq4\" (UniqueName: \"kubernetes.io/projected/b631c6ba-bb5a-4946-8a52-e23f7203c288-kube-api-access-7xxq4\") pod \"redhat-marketplace-fq6lx\" (UID: \"b631c6ba-bb5a-4946-8a52-e23f7203c288\") " pod="openshift-marketplace/redhat-marketplace-fq6lx" Feb 01 14:45:59 crc kubenswrapper[4820]: I0201 14:45:59.945376 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b631c6ba-bb5a-4946-8a52-e23f7203c288-catalog-content\") pod \"redhat-marketplace-fq6lx\" (UID: \"b631c6ba-bb5a-4946-8a52-e23f7203c288\") " pod="openshift-marketplace/redhat-marketplace-fq6lx" Feb 01 14:45:59 crc kubenswrapper[4820]: I0201 14:45:59.945456 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b631c6ba-bb5a-4946-8a52-e23f7203c288-utilities\") pod \"redhat-marketplace-fq6lx\" (UID: \"b631c6ba-bb5a-4946-8a52-e23f7203c288\") " pod="openshift-marketplace/redhat-marketplace-fq6lx" Feb 01 14:46:00 crc kubenswrapper[4820]: I0201 14:46:00.046510 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xxq4\" (UniqueName: \"kubernetes.io/projected/b631c6ba-bb5a-4946-8a52-e23f7203c288-kube-api-access-7xxq4\") pod \"redhat-marketplace-fq6lx\" (UID: \"b631c6ba-bb5a-4946-8a52-e23f7203c288\") " pod="openshift-marketplace/redhat-marketplace-fq6lx" Feb 01 14:46:00 crc kubenswrapper[4820]: I0201 14:46:00.046575 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b631c6ba-bb5a-4946-8a52-e23f7203c288-catalog-content\") pod \"redhat-marketplace-fq6lx\" (UID: \"b631c6ba-bb5a-4946-8a52-e23f7203c288\") " pod="openshift-marketplace/redhat-marketplace-fq6lx" Feb 01 14:46:00 crc kubenswrapper[4820]: I0201 14:46:00.046618 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b631c6ba-bb5a-4946-8a52-e23f7203c288-utilities\") pod \"redhat-marketplace-fq6lx\" (UID: \"b631c6ba-bb5a-4946-8a52-e23f7203c288\") " pod="openshift-marketplace/redhat-marketplace-fq6lx" Feb 01 
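
The machine-config-daemon liveness failures above are plain HTTP GETs from kubelet to 127.0.0.1:8798/health hitting a closed port. A hand-run equivalent from the node (endpoint taken verbatim from the log; kubelet counts any status from 200 up to but not including 400 as success):

```python
# Reproduce the failing HTTP liveness probe by hand (run on the node itself).
import urllib.request, urllib.error

URL = "http://127.0.0.1:8798/health"  # endpoint from the probe output above

try:
    with urllib.request.urlopen(URL, timeout=1) as resp:
        # kubelet treats 200 <= status < 400 as probe success
        print("probe success" if resp.status < 400 else f"probe failure: HTTP {resp.status}")
except (urllib.error.URLError, OSError) as exc:
    # "connection refused" here matches the kubelet output above:
    # nothing is listening on 8798 while the daemon container is down.
    print(f"probe failure: {exc}")
```
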
14:46:00 crc kubenswrapper[4820]: I0201 14:46:00.047220 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b631c6ba-bb5a-4946-8a52-e23f7203c288-catalog-content\") pod \"redhat-marketplace-fq6lx\" (UID: \"b631c6ba-bb5a-4946-8a52-e23f7203c288\") " pod="openshift-marketplace/redhat-marketplace-fq6lx" Feb 01 14:46:00 crc kubenswrapper[4820]: I0201 14:46:00.047238 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b631c6ba-bb5a-4946-8a52-e23f7203c288-utilities\") pod \"redhat-marketplace-fq6lx\" (UID: \"b631c6ba-bb5a-4946-8a52-e23f7203c288\") " pod="openshift-marketplace/redhat-marketplace-fq6lx" Feb 01 14:46:00 crc kubenswrapper[4820]: I0201 14:46:00.079688 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xxq4\" (UniqueName: \"kubernetes.io/projected/b631c6ba-bb5a-4946-8a52-e23f7203c288-kube-api-access-7xxq4\") pod \"redhat-marketplace-fq6lx\" (UID: \"b631c6ba-bb5a-4946-8a52-e23f7203c288\") " pod="openshift-marketplace/redhat-marketplace-fq6lx" Feb 01 14:46:00 crc kubenswrapper[4820]: I0201 14:46:00.082837 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fq6lx" Feb 01 14:46:00 crc kubenswrapper[4820]: I0201 14:46:00.586381 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fq6lx"] Feb 01 14:46:01 crc kubenswrapper[4820]: I0201 14:46:01.431097 4820 generic.go:334] "Generic (PLEG): container finished" podID="b631c6ba-bb5a-4946-8a52-e23f7203c288" containerID="a14a84308d8dacd9c45713ddb4ce415c9049175a1194fe47ec28b4fd55e0acb1" exitCode=0 Feb 01 14:46:01 crc kubenswrapper[4820]: I0201 14:46:01.431147 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fq6lx" event={"ID":"b631c6ba-bb5a-4946-8a52-e23f7203c288","Type":"ContainerDied","Data":"a14a84308d8dacd9c45713ddb4ce415c9049175a1194fe47ec28b4fd55e0acb1"} Feb 01 14:46:01 crc kubenswrapper[4820]: I0201 14:46:01.431175 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fq6lx" event={"ID":"b631c6ba-bb5a-4946-8a52-e23f7203c288","Type":"ContainerStarted","Data":"220c558827ec9a89d8cf9ea190b4abf506fb057e03bdffd486d837ab71ad3797"} Feb 01 14:46:02 crc kubenswrapper[4820]: I0201 14:46:02.440067 4820 generic.go:334] "Generic (PLEG): container finished" podID="b631c6ba-bb5a-4946-8a52-e23f7203c288" containerID="66450fcb866be193e20501d685721506e8af5d8eeaa9a4afe2c7c3a73918c4ea" exitCode=0 Feb 01 14:46:02 crc kubenswrapper[4820]: I0201 14:46:02.440399 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fq6lx" event={"ID":"b631c6ba-bb5a-4946-8a52-e23f7203c288","Type":"ContainerDied","Data":"66450fcb866be193e20501d685721506e8af5d8eeaa9a4afe2c7c3a73918c4ea"} Feb 01 14:46:03 crc kubenswrapper[4820]: I0201 14:46:03.448892 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fq6lx" event={"ID":"b631c6ba-bb5a-4946-8a52-e23f7203c288","Type":"ContainerStarted","Data":"eca7dcf35c821a292e8160bd86cd1bce2c879198585192d774eddc0ba12f0c4b"} Feb 01 14:46:03 crc kubenswrapper[4820]: I0201 14:46:03.467941 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fq6lx" podStartSLOduration=2.901952929 
podStartE2EDuration="4.467919733s" podCreationTimestamp="2026-02-01 14:45:59 +0000 UTC" firstStartedPulling="2026-02-01 14:46:01.432811706 +0000 UTC m=+1502.953177990" lastFinishedPulling="2026-02-01 14:46:02.99877852 +0000 UTC m=+1504.519144794" observedRunningTime="2026-02-01 14:46:03.466415756 +0000 UTC m=+1504.986782040" watchObservedRunningTime="2026-02-01 14:46:03.467919733 +0000 UTC m=+1504.988286017" Feb 01 14:46:05 crc kubenswrapper[4820]: I0201 14:46:05.547618 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xpw4d"] Feb 01 14:46:05 crc kubenswrapper[4820]: I0201 14:46:05.551739 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xpw4d" Feb 01 14:46:05 crc kubenswrapper[4820]: I0201 14:46:05.611247 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xpw4d"] Feb 01 14:46:05 crc kubenswrapper[4820]: I0201 14:46:05.722000 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm6sj\" (UniqueName: \"kubernetes.io/projected/c6606962-84cd-47fe-8555-dd37b9f4f5ed-kube-api-access-wm6sj\") pod \"certified-operators-xpw4d\" (UID: \"c6606962-84cd-47fe-8555-dd37b9f4f5ed\") " pod="openshift-marketplace/certified-operators-xpw4d" Feb 01 14:46:05 crc kubenswrapper[4820]: I0201 14:46:05.722051 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6606962-84cd-47fe-8555-dd37b9f4f5ed-catalog-content\") pod \"certified-operators-xpw4d\" (UID: \"c6606962-84cd-47fe-8555-dd37b9f4f5ed\") " pod="openshift-marketplace/certified-operators-xpw4d" Feb 01 14:46:05 crc kubenswrapper[4820]: I0201 14:46:05.722372 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6606962-84cd-47fe-8555-dd37b9f4f5ed-utilities\") pod \"certified-operators-xpw4d\" (UID: \"c6606962-84cd-47fe-8555-dd37b9f4f5ed\") " pod="openshift-marketplace/certified-operators-xpw4d" Feb 01 14:46:05 crc kubenswrapper[4820]: I0201 14:46:05.824142 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6606962-84cd-47fe-8555-dd37b9f4f5ed-utilities\") pod \"certified-operators-xpw4d\" (UID: \"c6606962-84cd-47fe-8555-dd37b9f4f5ed\") " pod="openshift-marketplace/certified-operators-xpw4d" Feb 01 14:46:05 crc kubenswrapper[4820]: I0201 14:46:05.824354 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm6sj\" (UniqueName: \"kubernetes.io/projected/c6606962-84cd-47fe-8555-dd37b9f4f5ed-kube-api-access-wm6sj\") pod \"certified-operators-xpw4d\" (UID: \"c6606962-84cd-47fe-8555-dd37b9f4f5ed\") " pod="openshift-marketplace/certified-operators-xpw4d" Feb 01 14:46:05 crc kubenswrapper[4820]: I0201 14:46:05.824401 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6606962-84cd-47fe-8555-dd37b9f4f5ed-catalog-content\") pod \"certified-operators-xpw4d\" (UID: \"c6606962-84cd-47fe-8555-dd37b9f4f5ed\") " pod="openshift-marketplace/certified-operators-xpw4d" Feb 01 14:46:05 crc kubenswrapper[4820]: I0201 14:46:05.824776 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c6606962-84cd-47fe-8555-dd37b9f4f5ed-utilities\") pod \"certified-operators-xpw4d\" (UID: \"c6606962-84cd-47fe-8555-dd37b9f4f5ed\") " pod="openshift-marketplace/certified-operators-xpw4d" Feb 01 14:46:05 crc kubenswrapper[4820]: I0201 14:46:05.824898 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6606962-84cd-47fe-8555-dd37b9f4f5ed-catalog-content\") pod \"certified-operators-xpw4d\" (UID: \"c6606962-84cd-47fe-8555-dd37b9f4f5ed\") " pod="openshift-marketplace/certified-operators-xpw4d" Feb 01 14:46:05 crc kubenswrapper[4820]: I0201 14:46:05.845281 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm6sj\" (UniqueName: \"kubernetes.io/projected/c6606962-84cd-47fe-8555-dd37b9f4f5ed-kube-api-access-wm6sj\") pod \"certified-operators-xpw4d\" (UID: \"c6606962-84cd-47fe-8555-dd37b9f4f5ed\") " pod="openshift-marketplace/certified-operators-xpw4d" Feb 01 14:46:05 crc kubenswrapper[4820]: I0201 14:46:05.942481 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xpw4d" Feb 01 14:46:06 crc kubenswrapper[4820]: I0201 14:46:06.444795 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xpw4d"] Feb 01 14:46:06 crc kubenswrapper[4820]: I0201 14:46:06.492549 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpw4d" event={"ID":"c6606962-84cd-47fe-8555-dd37b9f4f5ed","Type":"ContainerStarted","Data":"efddfe072d5a03713865eb706e45d9b349b2932389bcaa1a92ab621cd920d32a"} Feb 01 14:46:07 crc kubenswrapper[4820]: I0201 14:46:07.502866 4820 generic.go:334] "Generic (PLEG): container finished" podID="c6606962-84cd-47fe-8555-dd37b9f4f5ed" containerID="42d0c11a1a5e85d8bb7b1671846709c907b5e9ac5b8db2644f26dc147734b29d" exitCode=0 Feb 01 14:46:07 crc kubenswrapper[4820]: I0201 14:46:07.503215 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpw4d" event={"ID":"c6606962-84cd-47fe-8555-dd37b9f4f5ed","Type":"ContainerDied","Data":"42d0c11a1a5e85d8bb7b1671846709c907b5e9ac5b8db2644f26dc147734b29d"} Feb 01 14:46:10 crc kubenswrapper[4820]: I0201 14:46:10.083030 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fq6lx" Feb 01 14:46:10 crc kubenswrapper[4820]: I0201 14:46:10.083659 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fq6lx" Feb 01 14:46:10 crc kubenswrapper[4820]: I0201 14:46:10.130283 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fq6lx" Feb 01 14:46:10 crc kubenswrapper[4820]: I0201 14:46:10.344336 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-httfc"] Feb 01 14:46:10 crc kubenswrapper[4820]: I0201 14:46:10.347159 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-httfc" Feb 01 14:46:10 crc kubenswrapper[4820]: I0201 14:46:10.361380 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-httfc"] Feb 01 14:46:10 crc kubenswrapper[4820]: I0201 14:46:10.410727 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8wf4\" (UniqueName: \"kubernetes.io/projected/73a34a2c-7cd8-4e05-b440-aee1e4288aa4-kube-api-access-k8wf4\") pod \"redhat-operators-httfc\" (UID: \"73a34a2c-7cd8-4e05-b440-aee1e4288aa4\") " pod="openshift-marketplace/redhat-operators-httfc" Feb 01 14:46:10 crc kubenswrapper[4820]: I0201 14:46:10.411025 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73a34a2c-7cd8-4e05-b440-aee1e4288aa4-catalog-content\") pod \"redhat-operators-httfc\" (UID: \"73a34a2c-7cd8-4e05-b440-aee1e4288aa4\") " pod="openshift-marketplace/redhat-operators-httfc" Feb 01 14:46:10 crc kubenswrapper[4820]: I0201 14:46:10.411100 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73a34a2c-7cd8-4e05-b440-aee1e4288aa4-utilities\") pod \"redhat-operators-httfc\" (UID: \"73a34a2c-7cd8-4e05-b440-aee1e4288aa4\") " pod="openshift-marketplace/redhat-operators-httfc" Feb 01 14:46:10 crc kubenswrapper[4820]: I0201 14:46:10.512738 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8wf4\" (UniqueName: \"kubernetes.io/projected/73a34a2c-7cd8-4e05-b440-aee1e4288aa4-kube-api-access-k8wf4\") pod \"redhat-operators-httfc\" (UID: \"73a34a2c-7cd8-4e05-b440-aee1e4288aa4\") " pod="openshift-marketplace/redhat-operators-httfc" Feb 01 14:46:10 crc kubenswrapper[4820]: I0201 14:46:10.512910 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73a34a2c-7cd8-4e05-b440-aee1e4288aa4-catalog-content\") pod \"redhat-operators-httfc\" (UID: \"73a34a2c-7cd8-4e05-b440-aee1e4288aa4\") " pod="openshift-marketplace/redhat-operators-httfc" Feb 01 14:46:10 crc kubenswrapper[4820]: I0201 14:46:10.512954 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73a34a2c-7cd8-4e05-b440-aee1e4288aa4-utilities\") pod \"redhat-operators-httfc\" (UID: \"73a34a2c-7cd8-4e05-b440-aee1e4288aa4\") " pod="openshift-marketplace/redhat-operators-httfc" Feb 01 14:46:10 crc kubenswrapper[4820]: I0201 14:46:10.513568 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73a34a2c-7cd8-4e05-b440-aee1e4288aa4-catalog-content\") pod \"redhat-operators-httfc\" (UID: \"73a34a2c-7cd8-4e05-b440-aee1e4288aa4\") " pod="openshift-marketplace/redhat-operators-httfc" Feb 01 14:46:10 crc kubenswrapper[4820]: I0201 14:46:10.513622 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73a34a2c-7cd8-4e05-b440-aee1e4288aa4-utilities\") pod \"redhat-operators-httfc\" (UID: \"73a34a2c-7cd8-4e05-b440-aee1e4288aa4\") " pod="openshift-marketplace/redhat-operators-httfc" Feb 01 14:46:10 crc kubenswrapper[4820]: I0201 14:46:10.531858 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-k8wf4\" (UniqueName: \"kubernetes.io/projected/73a34a2c-7cd8-4e05-b440-aee1e4288aa4-kube-api-access-k8wf4\") pod \"redhat-operators-httfc\" (UID: \"73a34a2c-7cd8-4e05-b440-aee1e4288aa4\") " pod="openshift-marketplace/redhat-operators-httfc" Feb 01 14:46:10 crc kubenswrapper[4820]: I0201 14:46:10.584566 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fq6lx" Feb 01 14:46:10 crc kubenswrapper[4820]: I0201 14:46:10.670689 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-httfc" Feb 01 14:46:12 crc kubenswrapper[4820]: I0201 14:46:12.537321 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fq6lx"] Feb 01 14:46:12 crc kubenswrapper[4820]: I0201 14:46:12.542693 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fq6lx" podUID="b631c6ba-bb5a-4946-8a52-e23f7203c288" containerName="registry-server" containerID="cri-o://eca7dcf35c821a292e8160bd86cd1bce2c879198585192d774eddc0ba12f0c4b" gracePeriod=2 Feb 01 14:46:12 crc kubenswrapper[4820]: I0201 14:46:12.719272 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-httfc"] Feb 01 14:46:12 crc kubenswrapper[4820]: I0201 14:46:12.975063 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fq6lx" Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.074203 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b631c6ba-bb5a-4946-8a52-e23f7203c288-catalog-content\") pod \"b631c6ba-bb5a-4946-8a52-e23f7203c288\" (UID: \"b631c6ba-bb5a-4946-8a52-e23f7203c288\") " Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.074255 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xxq4\" (UniqueName: \"kubernetes.io/projected/b631c6ba-bb5a-4946-8a52-e23f7203c288-kube-api-access-7xxq4\") pod \"b631c6ba-bb5a-4946-8a52-e23f7203c288\" (UID: \"b631c6ba-bb5a-4946-8a52-e23f7203c288\") " Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.074439 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b631c6ba-bb5a-4946-8a52-e23f7203c288-utilities\") pod \"b631c6ba-bb5a-4946-8a52-e23f7203c288\" (UID: \"b631c6ba-bb5a-4946-8a52-e23f7203c288\") " Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.075041 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b631c6ba-bb5a-4946-8a52-e23f7203c288-utilities" (OuterVolumeSpecName: "utilities") pod "b631c6ba-bb5a-4946-8a52-e23f7203c288" (UID: "b631c6ba-bb5a-4946-8a52-e23f7203c288"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.079441 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b631c6ba-bb5a-4946-8a52-e23f7203c288-kube-api-access-7xxq4" (OuterVolumeSpecName: "kube-api-access-7xxq4") pod "b631c6ba-bb5a-4946-8a52-e23f7203c288" (UID: "b631c6ba-bb5a-4946-8a52-e23f7203c288"). InnerVolumeSpecName "kube-api-access-7xxq4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.176972 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xxq4\" (UniqueName: \"kubernetes.io/projected/b631c6ba-bb5a-4946-8a52-e23f7203c288-kube-api-access-7xxq4\") on node \"crc\" DevicePath \"\"" Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.177024 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b631c6ba-bb5a-4946-8a52-e23f7203c288-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.373021 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b631c6ba-bb5a-4946-8a52-e23f7203c288-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b631c6ba-bb5a-4946-8a52-e23f7203c288" (UID: "b631c6ba-bb5a-4946-8a52-e23f7203c288"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.379992 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b631c6ba-bb5a-4946-8a52-e23f7203c288-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.553975 4820 generic.go:334] "Generic (PLEG): container finished" podID="c6606962-84cd-47fe-8555-dd37b9f4f5ed" containerID="4534c2f940bcc22fefaa3c3f93ea11021e19e8734ec6ed84e4945e2e98f8e3f8" exitCode=0 Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.555103 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpw4d" event={"ID":"c6606962-84cd-47fe-8555-dd37b9f4f5ed","Type":"ContainerDied","Data":"4534c2f940bcc22fefaa3c3f93ea11021e19e8734ec6ed84e4945e2e98f8e3f8"} Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.555459 4820 generic.go:334] "Generic (PLEG): container finished" podID="73a34a2c-7cd8-4e05-b440-aee1e4288aa4" containerID="449fe867868181934cc52a6185e0ecb7ad8e53299a9db7c43232e9d3a3a12646" exitCode=0 Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.555521 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-httfc" event={"ID":"73a34a2c-7cd8-4e05-b440-aee1e4288aa4","Type":"ContainerDied","Data":"449fe867868181934cc52a6185e0ecb7ad8e53299a9db7c43232e9d3a3a12646"} Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.555548 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-httfc" event={"ID":"73a34a2c-7cd8-4e05-b440-aee1e4288aa4","Type":"ContainerStarted","Data":"da305a571a94acec064c2db3e024d6d1c8816b7e8c7fdfd7c0b4c180af34f8f0"} Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.558134 4820 generic.go:334] "Generic (PLEG): container finished" podID="b631c6ba-bb5a-4946-8a52-e23f7203c288" containerID="eca7dcf35c821a292e8160bd86cd1bce2c879198585192d774eddc0ba12f0c4b" exitCode=0 Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.558190 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fq6lx" Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.558203 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fq6lx" event={"ID":"b631c6ba-bb5a-4946-8a52-e23f7203c288","Type":"ContainerDied","Data":"eca7dcf35c821a292e8160bd86cd1bce2c879198585192d774eddc0ba12f0c4b"} Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.558455 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fq6lx" event={"ID":"b631c6ba-bb5a-4946-8a52-e23f7203c288","Type":"ContainerDied","Data":"220c558827ec9a89d8cf9ea190b4abf506fb057e03bdffd486d837ab71ad3797"} Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.558475 4820 scope.go:117] "RemoveContainer" containerID="eca7dcf35c821a292e8160bd86cd1bce2c879198585192d774eddc0ba12f0c4b" Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.583733 4820 scope.go:117] "RemoveContainer" containerID="66450fcb866be193e20501d685721506e8af5d8eeaa9a4afe2c7c3a73918c4ea" Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.645105 4820 scope.go:117] "RemoveContainer" containerID="a14a84308d8dacd9c45713ddb4ce415c9049175a1194fe47ec28b4fd55e0acb1" Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.702632 4820 scope.go:117] "RemoveContainer" containerID="eca7dcf35c821a292e8160bd86cd1bce2c879198585192d774eddc0ba12f0c4b" Feb 01 14:46:13 crc kubenswrapper[4820]: E0201 14:46:13.703239 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eca7dcf35c821a292e8160bd86cd1bce2c879198585192d774eddc0ba12f0c4b\": container with ID starting with eca7dcf35c821a292e8160bd86cd1bce2c879198585192d774eddc0ba12f0c4b not found: ID does not exist" containerID="eca7dcf35c821a292e8160bd86cd1bce2c879198585192d774eddc0ba12f0c4b" Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.703329 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eca7dcf35c821a292e8160bd86cd1bce2c879198585192d774eddc0ba12f0c4b"} err="failed to get container status \"eca7dcf35c821a292e8160bd86cd1bce2c879198585192d774eddc0ba12f0c4b\": rpc error: code = NotFound desc = could not find container \"eca7dcf35c821a292e8160bd86cd1bce2c879198585192d774eddc0ba12f0c4b\": container with ID starting with eca7dcf35c821a292e8160bd86cd1bce2c879198585192d774eddc0ba12f0c4b not found: ID does not exist" Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.703370 4820 scope.go:117] "RemoveContainer" containerID="66450fcb866be193e20501d685721506e8af5d8eeaa9a4afe2c7c3a73918c4ea" Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.703507 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fq6lx"] Feb 01 14:46:13 crc kubenswrapper[4820]: E0201 14:46:13.703763 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66450fcb866be193e20501d685721506e8af5d8eeaa9a4afe2c7c3a73918c4ea\": container with ID starting with 66450fcb866be193e20501d685721506e8af5d8eeaa9a4afe2c7c3a73918c4ea not found: ID does not exist" containerID="66450fcb866be193e20501d685721506e8af5d8eeaa9a4afe2c7c3a73918c4ea" Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.703795 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66450fcb866be193e20501d685721506e8af5d8eeaa9a4afe2c7c3a73918c4ea"} err="failed to 
get container status \"66450fcb866be193e20501d685721506e8af5d8eeaa9a4afe2c7c3a73918c4ea\": rpc error: code = NotFound desc = could not find container \"66450fcb866be193e20501d685721506e8af5d8eeaa9a4afe2c7c3a73918c4ea\": container with ID starting with 66450fcb866be193e20501d685721506e8af5d8eeaa9a4afe2c7c3a73918c4ea not found: ID does not exist" Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.703814 4820 scope.go:117] "RemoveContainer" containerID="a14a84308d8dacd9c45713ddb4ce415c9049175a1194fe47ec28b4fd55e0acb1" Feb 01 14:46:13 crc kubenswrapper[4820]: E0201 14:46:13.707551 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a14a84308d8dacd9c45713ddb4ce415c9049175a1194fe47ec28b4fd55e0acb1\": container with ID starting with a14a84308d8dacd9c45713ddb4ce415c9049175a1194fe47ec28b4fd55e0acb1 not found: ID does not exist" containerID="a14a84308d8dacd9c45713ddb4ce415c9049175a1194fe47ec28b4fd55e0acb1" Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.707591 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a14a84308d8dacd9c45713ddb4ce415c9049175a1194fe47ec28b4fd55e0acb1"} err="failed to get container status \"a14a84308d8dacd9c45713ddb4ce415c9049175a1194fe47ec28b4fd55e0acb1\": rpc error: code = NotFound desc = could not find container \"a14a84308d8dacd9c45713ddb4ce415c9049175a1194fe47ec28b4fd55e0acb1\": container with ID starting with a14a84308d8dacd9c45713ddb4ce415c9049175a1194fe47ec28b4fd55e0acb1 not found: ID does not exist" Feb 01 14:46:13 crc kubenswrapper[4820]: I0201 14:46:13.715575 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fq6lx"] Feb 01 14:46:14 crc kubenswrapper[4820]: I0201 14:46:14.570099 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-httfc" event={"ID":"73a34a2c-7cd8-4e05-b440-aee1e4288aa4","Type":"ContainerStarted","Data":"3d62fa9c9543d35c0b5d8259fb98d0c956a2dfe37e983f36b480950fb7d80120"} Feb 01 14:46:14 crc kubenswrapper[4820]: I0201 14:46:14.577208 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpw4d" event={"ID":"c6606962-84cd-47fe-8555-dd37b9f4f5ed","Type":"ContainerStarted","Data":"d56a9df71b8652588a0f7b9f69a79a9bf15d409759e25d5767a75ae93e1c5254"} Feb 01 14:46:14 crc kubenswrapper[4820]: I0201 14:46:14.610294 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xpw4d" podStartSLOduration=3.088780198 podStartE2EDuration="9.610277967s" podCreationTimestamp="2026-02-01 14:46:05 +0000 UTC" firstStartedPulling="2026-02-01 14:46:07.504935895 +0000 UTC m=+1509.025302179" lastFinishedPulling="2026-02-01 14:46:14.026433664 +0000 UTC m=+1515.546799948" observedRunningTime="2026-02-01 14:46:14.606480574 +0000 UTC m=+1516.126846858" watchObservedRunningTime="2026-02-01 14:46:14.610277967 +0000 UTC m=+1516.130644251" Feb 01 14:46:15 crc kubenswrapper[4820]: I0201 14:46:15.212213 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b631c6ba-bb5a-4946-8a52-e23f7203c288" path="/var/lib/kubelet/pods/b631c6ba-bb5a-4946-8a52-e23f7203c288/volumes" Feb 01 14:46:15 crc kubenswrapper[4820]: I0201 14:46:15.588150 4820 generic.go:334] "Generic (PLEG): container finished" podID="73a34a2c-7cd8-4e05-b440-aee1e4288aa4" containerID="3d62fa9c9543d35c0b5d8259fb98d0c956a2dfe37e983f36b480950fb7d80120" exitCode=0 Feb 01 
14:46:15 crc kubenswrapper[4820]: I0201 14:46:15.588200 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-httfc" event={"ID":"73a34a2c-7cd8-4e05-b440-aee1e4288aa4","Type":"ContainerDied","Data":"3d62fa9c9543d35c0b5d8259fb98d0c956a2dfe37e983f36b480950fb7d80120"} Feb 01 14:46:15 crc kubenswrapper[4820]: I0201 14:46:15.943111 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xpw4d" Feb 01 14:46:15 crc kubenswrapper[4820]: I0201 14:46:15.943166 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xpw4d" Feb 01 14:46:16 crc kubenswrapper[4820]: I0201 14:46:16.598266 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-httfc" event={"ID":"73a34a2c-7cd8-4e05-b440-aee1e4288aa4","Type":"ContainerStarted","Data":"cd44f0db7536b2e3b1410951c328c73e721a50f5d53b6fd7461f746ea6f84a00"} Feb 01 14:46:16 crc kubenswrapper[4820]: I0201 14:46:16.617832 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-httfc" podStartSLOduration=4.133254411 podStartE2EDuration="6.61781348s" podCreationTimestamp="2026-02-01 14:46:10 +0000 UTC" firstStartedPulling="2026-02-01 14:46:13.55769791 +0000 UTC m=+1515.078064194" lastFinishedPulling="2026-02-01 14:46:16.042256979 +0000 UTC m=+1517.562623263" observedRunningTime="2026-02-01 14:46:16.611451643 +0000 UTC m=+1518.131817927" watchObservedRunningTime="2026-02-01 14:46:16.61781348 +0000 UTC m=+1518.138179764" Feb 01 14:46:16 crc kubenswrapper[4820]: I0201 14:46:16.989228 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-xpw4d" podUID="c6606962-84cd-47fe-8555-dd37b9f4f5ed" containerName="registry-server" probeResult="failure" output=< Feb 01 14:46:16 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 01 14:46:16 crc kubenswrapper[4820]: > Feb 01 14:46:19 crc kubenswrapper[4820]: I0201 14:46:19.242438 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:46:19 crc kubenswrapper[4820]: I0201 14:46:19.242754 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:46:20 crc kubenswrapper[4820]: I0201 14:46:20.672713 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-httfc" Feb 01 14:46:20 crc kubenswrapper[4820]: I0201 14:46:20.673213 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-httfc" Feb 01 14:46:21 crc kubenswrapper[4820]: I0201 14:46:21.732490 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-httfc" podUID="73a34a2c-7cd8-4e05-b440-aee1e4288aa4" containerName="registry-server" probeResult="failure" output=< Feb 01 14:46:21 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s 
Feb 01 14:46:21 crc kubenswrapper[4820]: > Feb 01 14:46:25 crc kubenswrapper[4820]: I0201 14:46:25.994000 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xpw4d" Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.039229 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xpw4d" Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.104457 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xpw4d"] Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.228449 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8nz9l"] Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.229060 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8nz9l" podUID="32aaf442-6b0a-4415-a767-4fd051191e47" containerName="registry-server" containerID="cri-o://403a2fcf12fd166155d1b97d079ab77fda336e83e9660ec3f0833c8c346da7a9" gracePeriod=2 Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.694967 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8nz9l" Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.697691 4820 generic.go:334] "Generic (PLEG): container finished" podID="32aaf442-6b0a-4415-a767-4fd051191e47" containerID="403a2fcf12fd166155d1b97d079ab77fda336e83e9660ec3f0833c8c346da7a9" exitCode=0 Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.698391 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8nz9l" Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.698510 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nz9l" event={"ID":"32aaf442-6b0a-4415-a767-4fd051191e47","Type":"ContainerDied","Data":"403a2fcf12fd166155d1b97d079ab77fda336e83e9660ec3f0833c8c346da7a9"} Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.698542 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nz9l" event={"ID":"32aaf442-6b0a-4415-a767-4fd051191e47","Type":"ContainerDied","Data":"4dc472ea2e3c1d87a651b36165cbf1c067b89ee5d06ae930ac08eaa0ccb5046a"} Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.698558 4820 scope.go:117] "RemoveContainer" containerID="403a2fcf12fd166155d1b97d079ab77fda336e83e9660ec3f0833c8c346da7a9" Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.739728 4820 scope.go:117] "RemoveContainer" containerID="35a9f5969033906534d2abd5ebb167399c21762b6acff87164e59d93ae6280a2" Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.770109 4820 scope.go:117] "RemoveContainer" containerID="aeae6a6147cf7e44ab7ddb369f64387953b9ad59c00acfdc47d047ddde7fb3b8" Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.810261 4820 scope.go:117] "RemoveContainer" containerID="403a2fcf12fd166155d1b97d079ab77fda336e83e9660ec3f0833c8c346da7a9" Feb 01 14:46:26 crc kubenswrapper[4820]: E0201 14:46:26.811240 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"403a2fcf12fd166155d1b97d079ab77fda336e83e9660ec3f0833c8c346da7a9\": container with ID starting with 403a2fcf12fd166155d1b97d079ab77fda336e83e9660ec3f0833c8c346da7a9 not found: ID does not exist" 
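
The startup-probe failures for certified-operators-xpw4d and redhat-operators-httfc above appear to come from a grpc-health-probe style exec check against the registry-server port, with a one-second budget (the output reads: timeout: failed to connect service ":50051" within 1s). A rough reachability check from the node; this tests only TCP connect, not the grpc.health.v1 RPC the real probe performs, and the pod IP is a placeholder to look up via `oc get pod -o wide`:

```python
# Can we reach the registry-server gRPC port within 1s?
import socket

POD_IP = "10.217.0.99"   # placeholder: substitute the catalog pod's IP

try:
    with socket.create_connection((POD_IP, 50051), timeout=1):
        print("port 50051 reachable within 1s")
except OSError as exc:
    # Mirrors the probe failure above: connect did not succeed within 1s.
    print(f"probe would fail: {exc}")
```
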
containerID="403a2fcf12fd166155d1b97d079ab77fda336e83e9660ec3f0833c8c346da7a9" Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.811272 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"403a2fcf12fd166155d1b97d079ab77fda336e83e9660ec3f0833c8c346da7a9"} err="failed to get container status \"403a2fcf12fd166155d1b97d079ab77fda336e83e9660ec3f0833c8c346da7a9\": rpc error: code = NotFound desc = could not find container \"403a2fcf12fd166155d1b97d079ab77fda336e83e9660ec3f0833c8c346da7a9\": container with ID starting with 403a2fcf12fd166155d1b97d079ab77fda336e83e9660ec3f0833c8c346da7a9 not found: ID does not exist" Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.811294 4820 scope.go:117] "RemoveContainer" containerID="35a9f5969033906534d2abd5ebb167399c21762b6acff87164e59d93ae6280a2" Feb 01 14:46:26 crc kubenswrapper[4820]: E0201 14:46:26.811721 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35a9f5969033906534d2abd5ebb167399c21762b6acff87164e59d93ae6280a2\": container with ID starting with 35a9f5969033906534d2abd5ebb167399c21762b6acff87164e59d93ae6280a2 not found: ID does not exist" containerID="35a9f5969033906534d2abd5ebb167399c21762b6acff87164e59d93ae6280a2" Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.811760 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35a9f5969033906534d2abd5ebb167399c21762b6acff87164e59d93ae6280a2"} err="failed to get container status \"35a9f5969033906534d2abd5ebb167399c21762b6acff87164e59d93ae6280a2\": rpc error: code = NotFound desc = could not find container \"35a9f5969033906534d2abd5ebb167399c21762b6acff87164e59d93ae6280a2\": container with ID starting with 35a9f5969033906534d2abd5ebb167399c21762b6acff87164e59d93ae6280a2 not found: ID does not exist" Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.811776 4820 scope.go:117] "RemoveContainer" containerID="aeae6a6147cf7e44ab7ddb369f64387953b9ad59c00acfdc47d047ddde7fb3b8" Feb 01 14:46:26 crc kubenswrapper[4820]: E0201 14:46:26.812365 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aeae6a6147cf7e44ab7ddb369f64387953b9ad59c00acfdc47d047ddde7fb3b8\": container with ID starting with aeae6a6147cf7e44ab7ddb369f64387953b9ad59c00acfdc47d047ddde7fb3b8 not found: ID does not exist" containerID="aeae6a6147cf7e44ab7ddb369f64387953b9ad59c00acfdc47d047ddde7fb3b8" Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.812399 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aeae6a6147cf7e44ab7ddb369f64387953b9ad59c00acfdc47d047ddde7fb3b8"} err="failed to get container status \"aeae6a6147cf7e44ab7ddb369f64387953b9ad59c00acfdc47d047ddde7fb3b8\": rpc error: code = NotFound desc = could not find container \"aeae6a6147cf7e44ab7ddb369f64387953b9ad59c00acfdc47d047ddde7fb3b8\": container with ID starting with aeae6a6147cf7e44ab7ddb369f64387953b9ad59c00acfdc47d047ddde7fb3b8 not found: ID does not exist" Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.833395 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32aaf442-6b0a-4415-a767-4fd051191e47-catalog-content\") pod \"32aaf442-6b0a-4415-a767-4fd051191e47\" (UID: \"32aaf442-6b0a-4415-a767-4fd051191e47\") " Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 
14:46:26.833480 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv2br\" (UniqueName: \"kubernetes.io/projected/32aaf442-6b0a-4415-a767-4fd051191e47-kube-api-access-gv2br\") pod \"32aaf442-6b0a-4415-a767-4fd051191e47\" (UID: \"32aaf442-6b0a-4415-a767-4fd051191e47\") " Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.833589 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32aaf442-6b0a-4415-a767-4fd051191e47-utilities\") pod \"32aaf442-6b0a-4415-a767-4fd051191e47\" (UID: \"32aaf442-6b0a-4415-a767-4fd051191e47\") " Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.836463 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32aaf442-6b0a-4415-a767-4fd051191e47-utilities" (OuterVolumeSpecName: "utilities") pod "32aaf442-6b0a-4415-a767-4fd051191e47" (UID: "32aaf442-6b0a-4415-a767-4fd051191e47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.845003 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32aaf442-6b0a-4415-a767-4fd051191e47-kube-api-access-gv2br" (OuterVolumeSpecName: "kube-api-access-gv2br") pod "32aaf442-6b0a-4415-a767-4fd051191e47" (UID: "32aaf442-6b0a-4415-a767-4fd051191e47"). InnerVolumeSpecName "kube-api-access-gv2br". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.872557 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32aaf442-6b0a-4415-a767-4fd051191e47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32aaf442-6b0a-4415-a767-4fd051191e47" (UID: "32aaf442-6b0a-4415-a767-4fd051191e47"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.936690 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32aaf442-6b0a-4415-a767-4fd051191e47-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.936723 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv2br\" (UniqueName: \"kubernetes.io/projected/32aaf442-6b0a-4415-a767-4fd051191e47-kube-api-access-gv2br\") on node \"crc\" DevicePath \"\"" Feb 01 14:46:26 crc kubenswrapper[4820]: I0201 14:46:26.936736 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32aaf442-6b0a-4415-a767-4fd051191e47-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 14:46:27 crc kubenswrapper[4820]: I0201 14:46:27.028009 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8nz9l"] Feb 01 14:46:27 crc kubenswrapper[4820]: I0201 14:46:27.036238 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8nz9l"] Feb 01 14:46:27 crc kubenswrapper[4820]: I0201 14:46:27.211482 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32aaf442-6b0a-4415-a767-4fd051191e47" path="/var/lib/kubelet/pods/32aaf442-6b0a-4415-a767-4fd051191e47/volumes" Feb 01 14:46:30 crc kubenswrapper[4820]: I0201 14:46:30.718034 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-httfc" Feb 01 14:46:30 crc kubenswrapper[4820]: I0201 14:46:30.762194 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-httfc" Feb 01 14:46:31 crc kubenswrapper[4820]: I0201 14:46:31.441791 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-httfc"] Feb 01 14:46:31 crc kubenswrapper[4820]: I0201 14:46:31.738962 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-httfc" podUID="73a34a2c-7cd8-4e05-b440-aee1e4288aa4" containerName="registry-server" containerID="cri-o://cd44f0db7536b2e3b1410951c328c73e721a50f5d53b6fd7461f746ea6f84a00" gracePeriod=2 Feb 01 14:46:32 crc kubenswrapper[4820]: I0201 14:46:32.208858 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-httfc" Feb 01 14:46:32 crc kubenswrapper[4820]: I0201 14:46:32.331115 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73a34a2c-7cd8-4e05-b440-aee1e4288aa4-utilities\") pod \"73a34a2c-7cd8-4e05-b440-aee1e4288aa4\" (UID: \"73a34a2c-7cd8-4e05-b440-aee1e4288aa4\") " Feb 01 14:46:32 crc kubenswrapper[4820]: I0201 14:46:32.331160 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8wf4\" (UniqueName: \"kubernetes.io/projected/73a34a2c-7cd8-4e05-b440-aee1e4288aa4-kube-api-access-k8wf4\") pod \"73a34a2c-7cd8-4e05-b440-aee1e4288aa4\" (UID: \"73a34a2c-7cd8-4e05-b440-aee1e4288aa4\") " Feb 01 14:46:32 crc kubenswrapper[4820]: I0201 14:46:32.331370 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73a34a2c-7cd8-4e05-b440-aee1e4288aa4-catalog-content\") pod \"73a34a2c-7cd8-4e05-b440-aee1e4288aa4\" (UID: \"73a34a2c-7cd8-4e05-b440-aee1e4288aa4\") " Feb 01 14:46:32 crc kubenswrapper[4820]: I0201 14:46:32.333317 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73a34a2c-7cd8-4e05-b440-aee1e4288aa4-utilities" (OuterVolumeSpecName: "utilities") pod "73a34a2c-7cd8-4e05-b440-aee1e4288aa4" (UID: "73a34a2c-7cd8-4e05-b440-aee1e4288aa4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:46:32 crc kubenswrapper[4820]: I0201 14:46:32.336849 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73a34a2c-7cd8-4e05-b440-aee1e4288aa4-kube-api-access-k8wf4" (OuterVolumeSpecName: "kube-api-access-k8wf4") pod "73a34a2c-7cd8-4e05-b440-aee1e4288aa4" (UID: "73a34a2c-7cd8-4e05-b440-aee1e4288aa4"). InnerVolumeSpecName "kube-api-access-k8wf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:46:32 crc kubenswrapper[4820]: I0201 14:46:32.434540 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73a34a2c-7cd8-4e05-b440-aee1e4288aa4-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 14:46:32 crc kubenswrapper[4820]: I0201 14:46:32.434588 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8wf4\" (UniqueName: \"kubernetes.io/projected/73a34a2c-7cd8-4e05-b440-aee1e4288aa4-kube-api-access-k8wf4\") on node \"crc\" DevicePath \"\"" Feb 01 14:46:32 crc kubenswrapper[4820]: I0201 14:46:32.466927 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73a34a2c-7cd8-4e05-b440-aee1e4288aa4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73a34a2c-7cd8-4e05-b440-aee1e4288aa4" (UID: "73a34a2c-7cd8-4e05-b440-aee1e4288aa4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:46:32 crc kubenswrapper[4820]: I0201 14:46:32.536537 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73a34a2c-7cd8-4e05-b440-aee1e4288aa4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 14:46:32 crc kubenswrapper[4820]: I0201 14:46:32.748573 4820 generic.go:334] "Generic (PLEG): container finished" podID="73a34a2c-7cd8-4e05-b440-aee1e4288aa4" containerID="cd44f0db7536b2e3b1410951c328c73e721a50f5d53b6fd7461f746ea6f84a00" exitCode=0 Feb 01 14:46:32 crc kubenswrapper[4820]: I0201 14:46:32.748626 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-httfc" event={"ID":"73a34a2c-7cd8-4e05-b440-aee1e4288aa4","Type":"ContainerDied","Data":"cd44f0db7536b2e3b1410951c328c73e721a50f5d53b6fd7461f746ea6f84a00"} Feb 01 14:46:32 crc kubenswrapper[4820]: I0201 14:46:32.748667 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-httfc" event={"ID":"73a34a2c-7cd8-4e05-b440-aee1e4288aa4","Type":"ContainerDied","Data":"da305a571a94acec064c2db3e024d6d1c8816b7e8c7fdfd7c0b4c180af34f8f0"} Feb 01 14:46:32 crc kubenswrapper[4820]: I0201 14:46:32.748697 4820 scope.go:117] "RemoveContainer" containerID="cd44f0db7536b2e3b1410951c328c73e721a50f5d53b6fd7461f746ea6f84a00" Feb 01 14:46:32 crc kubenswrapper[4820]: I0201 14:46:32.748698 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-httfc" Feb 01 14:46:32 crc kubenswrapper[4820]: I0201 14:46:32.770235 4820 scope.go:117] "RemoveContainer" containerID="3d62fa9c9543d35c0b5d8259fb98d0c956a2dfe37e983f36b480950fb7d80120" Feb 01 14:46:32 crc kubenswrapper[4820]: I0201 14:46:32.788380 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-httfc"] Feb 01 14:46:32 crc kubenswrapper[4820]: I0201 14:46:32.797951 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-httfc"] Feb 01 14:46:32 crc kubenswrapper[4820]: I0201 14:46:32.809840 4820 scope.go:117] "RemoveContainer" containerID="449fe867868181934cc52a6185e0ecb7ad8e53299a9db7c43232e9d3a3a12646" Feb 01 14:46:32 crc kubenswrapper[4820]: I0201 14:46:32.833289 4820 scope.go:117] "RemoveContainer" containerID="cd44f0db7536b2e3b1410951c328c73e721a50f5d53b6fd7461f746ea6f84a00" Feb 01 14:46:32 crc kubenswrapper[4820]: E0201 14:46:32.833694 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd44f0db7536b2e3b1410951c328c73e721a50f5d53b6fd7461f746ea6f84a00\": container with ID starting with cd44f0db7536b2e3b1410951c328c73e721a50f5d53b6fd7461f746ea6f84a00 not found: ID does not exist" containerID="cd44f0db7536b2e3b1410951c328c73e721a50f5d53b6fd7461f746ea6f84a00" Feb 01 14:46:32 crc kubenswrapper[4820]: I0201 14:46:32.833726 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd44f0db7536b2e3b1410951c328c73e721a50f5d53b6fd7461f746ea6f84a00"} err="failed to get container status \"cd44f0db7536b2e3b1410951c328c73e721a50f5d53b6fd7461f746ea6f84a00\": rpc error: code = NotFound desc = could not find container \"cd44f0db7536b2e3b1410951c328c73e721a50f5d53b6fd7461f746ea6f84a00\": container with ID starting with cd44f0db7536b2e3b1410951c328c73e721a50f5d53b6fd7461f746ea6f84a00 not found: ID does not exist" Feb 01 14:46:32 crc 
kubenswrapper[4820]: I0201 14:46:32.833746 4820 scope.go:117] "RemoveContainer" containerID="3d62fa9c9543d35c0b5d8259fb98d0c956a2dfe37e983f36b480950fb7d80120" Feb 01 14:46:32 crc kubenswrapper[4820]: E0201 14:46:32.834080 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d62fa9c9543d35c0b5d8259fb98d0c956a2dfe37e983f36b480950fb7d80120\": container with ID starting with 3d62fa9c9543d35c0b5d8259fb98d0c956a2dfe37e983f36b480950fb7d80120 not found: ID does not exist" containerID="3d62fa9c9543d35c0b5d8259fb98d0c956a2dfe37e983f36b480950fb7d80120" Feb 01 14:46:32 crc kubenswrapper[4820]: I0201 14:46:32.834104 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d62fa9c9543d35c0b5d8259fb98d0c956a2dfe37e983f36b480950fb7d80120"} err="failed to get container status \"3d62fa9c9543d35c0b5d8259fb98d0c956a2dfe37e983f36b480950fb7d80120\": rpc error: code = NotFound desc = could not find container \"3d62fa9c9543d35c0b5d8259fb98d0c956a2dfe37e983f36b480950fb7d80120\": container with ID starting with 3d62fa9c9543d35c0b5d8259fb98d0c956a2dfe37e983f36b480950fb7d80120 not found: ID does not exist" Feb 01 14:46:32 crc kubenswrapper[4820]: I0201 14:46:32.834122 4820 scope.go:117] "RemoveContainer" containerID="449fe867868181934cc52a6185e0ecb7ad8e53299a9db7c43232e9d3a3a12646" Feb 01 14:46:32 crc kubenswrapper[4820]: E0201 14:46:32.834518 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"449fe867868181934cc52a6185e0ecb7ad8e53299a9db7c43232e9d3a3a12646\": container with ID starting with 449fe867868181934cc52a6185e0ecb7ad8e53299a9db7c43232e9d3a3a12646 not found: ID does not exist" containerID="449fe867868181934cc52a6185e0ecb7ad8e53299a9db7c43232e9d3a3a12646" Feb 01 14:46:32 crc kubenswrapper[4820]: I0201 14:46:32.834539 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"449fe867868181934cc52a6185e0ecb7ad8e53299a9db7c43232e9d3a3a12646"} err="failed to get container status \"449fe867868181934cc52a6185e0ecb7ad8e53299a9db7c43232e9d3a3a12646\": rpc error: code = NotFound desc = could not find container \"449fe867868181934cc52a6185e0ecb7ad8e53299a9db7c43232e9d3a3a12646\": container with ID starting with 449fe867868181934cc52a6185e0ecb7ad8e53299a9db7c43232e9d3a3a12646 not found: ID does not exist" Feb 01 14:46:33 crc kubenswrapper[4820]: I0201 14:46:33.214352 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73a34a2c-7cd8-4e05-b440-aee1e4288aa4" path="/var/lib/kubelet/pods/73a34a2c-7cd8-4e05-b440-aee1e4288aa4/volumes" Feb 01 14:46:40 crc kubenswrapper[4820]: I0201 14:46:40.828345 4820 generic.go:334] "Generic (PLEG): container finished" podID="81fc5a7e-aabf-4edd-9710-1f4322485ab7" containerID="21bfb168f68926066b3b94491bedf603dd145ad7e6e621572510bfd6dbadbed7" exitCode=0 Feb 01 14:46:40 crc kubenswrapper[4820]: I0201 14:46:40.828440 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn" event={"ID":"81fc5a7e-aabf-4edd-9710-1f4322485ab7","Type":"ContainerDied","Data":"21bfb168f68926066b3b94491bedf603dd145ad7e6e621572510bfd6dbadbed7"} Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.258168 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.413454 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/81fc5a7e-aabf-4edd-9710-1f4322485ab7-ssh-key-openstack-edpm-ipam\") pod \"81fc5a7e-aabf-4edd-9710-1f4322485ab7\" (UID: \"81fc5a7e-aabf-4edd-9710-1f4322485ab7\") " Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.413493 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmg7m\" (UniqueName: \"kubernetes.io/projected/81fc5a7e-aabf-4edd-9710-1f4322485ab7-kube-api-access-dmg7m\") pod \"81fc5a7e-aabf-4edd-9710-1f4322485ab7\" (UID: \"81fc5a7e-aabf-4edd-9710-1f4322485ab7\") " Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.413603 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81fc5a7e-aabf-4edd-9710-1f4322485ab7-inventory\") pod \"81fc5a7e-aabf-4edd-9710-1f4322485ab7\" (UID: \"81fc5a7e-aabf-4edd-9710-1f4322485ab7\") " Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.423007 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81fc5a7e-aabf-4edd-9710-1f4322485ab7-kube-api-access-dmg7m" (OuterVolumeSpecName: "kube-api-access-dmg7m") pod "81fc5a7e-aabf-4edd-9710-1f4322485ab7" (UID: "81fc5a7e-aabf-4edd-9710-1f4322485ab7"). InnerVolumeSpecName "kube-api-access-dmg7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.441534 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81fc5a7e-aabf-4edd-9710-1f4322485ab7-inventory" (OuterVolumeSpecName: "inventory") pod "81fc5a7e-aabf-4edd-9710-1f4322485ab7" (UID: "81fc5a7e-aabf-4edd-9710-1f4322485ab7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.484723 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81fc5a7e-aabf-4edd-9710-1f4322485ab7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "81fc5a7e-aabf-4edd-9710-1f4322485ab7" (UID: "81fc5a7e-aabf-4edd-9710-1f4322485ab7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.516601 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/81fc5a7e-aabf-4edd-9710-1f4322485ab7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.516633 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmg7m\" (UniqueName: \"kubernetes.io/projected/81fc5a7e-aabf-4edd-9710-1f4322485ab7-kube-api-access-dmg7m\") on node \"crc\" DevicePath \"\"" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.516643 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81fc5a7e-aabf-4edd-9710-1f4322485ab7-inventory\") on node \"crc\" DevicePath \"\"" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.853916 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn" event={"ID":"81fc5a7e-aabf-4edd-9710-1f4322485ab7","Type":"ContainerDied","Data":"b188a2c6013cbdd2ba7f958272ee6413c8a7e4fe2ae2900c34c28e731e8da611"} Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.853951 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b188a2c6013cbdd2ba7f958272ee6413c8a7e4fe2ae2900c34c28e731e8da611" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.854049 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.927978 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj"] Feb 01 14:46:42 crc kubenswrapper[4820]: E0201 14:46:42.928347 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73a34a2c-7cd8-4e05-b440-aee1e4288aa4" containerName="extract-content" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.928363 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="73a34a2c-7cd8-4e05-b440-aee1e4288aa4" containerName="extract-content" Feb 01 14:46:42 crc kubenswrapper[4820]: E0201 14:46:42.928372 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32aaf442-6b0a-4415-a767-4fd051191e47" containerName="extract-utilities" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.928379 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="32aaf442-6b0a-4415-a767-4fd051191e47" containerName="extract-utilities" Feb 01 14:46:42 crc kubenswrapper[4820]: E0201 14:46:42.928393 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73a34a2c-7cd8-4e05-b440-aee1e4288aa4" containerName="extract-utilities" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.928399 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="73a34a2c-7cd8-4e05-b440-aee1e4288aa4" containerName="extract-utilities" Feb 01 14:46:42 crc kubenswrapper[4820]: E0201 14:46:42.928440 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32aaf442-6b0a-4415-a767-4fd051191e47" containerName="registry-server" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.928446 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="32aaf442-6b0a-4415-a767-4fd051191e47" containerName="registry-server" Feb 01 14:46:42 crc kubenswrapper[4820]: E0201 14:46:42.928458 4820 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="73a34a2c-7cd8-4e05-b440-aee1e4288aa4" containerName="registry-server" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.928463 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="73a34a2c-7cd8-4e05-b440-aee1e4288aa4" containerName="registry-server" Feb 01 14:46:42 crc kubenswrapper[4820]: E0201 14:46:42.928477 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81fc5a7e-aabf-4edd-9710-1f4322485ab7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.928483 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="81fc5a7e-aabf-4edd-9710-1f4322485ab7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 01 14:46:42 crc kubenswrapper[4820]: E0201 14:46:42.928497 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b631c6ba-bb5a-4946-8a52-e23f7203c288" containerName="extract-utilities" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.928502 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b631c6ba-bb5a-4946-8a52-e23f7203c288" containerName="extract-utilities" Feb 01 14:46:42 crc kubenswrapper[4820]: E0201 14:46:42.928514 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32aaf442-6b0a-4415-a767-4fd051191e47" containerName="extract-content" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.928520 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="32aaf442-6b0a-4415-a767-4fd051191e47" containerName="extract-content" Feb 01 14:46:42 crc kubenswrapper[4820]: E0201 14:46:42.928530 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b631c6ba-bb5a-4946-8a52-e23f7203c288" containerName="registry-server" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.928535 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b631c6ba-bb5a-4946-8a52-e23f7203c288" containerName="registry-server" Feb 01 14:46:42 crc kubenswrapper[4820]: E0201 14:46:42.928545 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b631c6ba-bb5a-4946-8a52-e23f7203c288" containerName="extract-content" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.928551 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b631c6ba-bb5a-4946-8a52-e23f7203c288" containerName="extract-content" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.928759 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="32aaf442-6b0a-4415-a767-4fd051191e47" containerName="registry-server" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.928777 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="81fc5a7e-aabf-4edd-9710-1f4322485ab7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.928789 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="73a34a2c-7cd8-4e05-b440-aee1e4288aa4" containerName="registry-server" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.928820 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b631c6ba-bb5a-4946-8a52-e23f7203c288" containerName="registry-server" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.929426 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.931991 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.932040 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.932037 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.932229 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 01 14:46:42 crc kubenswrapper[4820]: I0201 14:46:42.943764 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj"] Feb 01 14:46:43 crc kubenswrapper[4820]: I0201 14:46:43.127459 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj\" (UID: \"ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj" Feb 01 14:46:43 crc kubenswrapper[4820]: I0201 14:46:43.128005 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj\" (UID: \"ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj" Feb 01 14:46:43 crc kubenswrapper[4820]: I0201 14:46:43.128102 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9glkc\" (UniqueName: \"kubernetes.io/projected/ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996-kube-api-access-9glkc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj\" (UID: \"ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj" Feb 01 14:46:43 crc kubenswrapper[4820]: I0201 14:46:43.229546 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9glkc\" (UniqueName: \"kubernetes.io/projected/ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996-kube-api-access-9glkc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj\" (UID: \"ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj" Feb 01 14:46:43 crc kubenswrapper[4820]: I0201 14:46:43.229660 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj\" (UID: \"ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj" Feb 01 14:46:43 crc kubenswrapper[4820]: I0201 14:46:43.229735 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj\" (UID: \"ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj" Feb 01 14:46:43 crc kubenswrapper[4820]: I0201 14:46:43.236432 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj\" (UID: \"ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj" Feb 01 14:46:43 crc kubenswrapper[4820]: I0201 14:46:43.236561 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj\" (UID: \"ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj" Feb 01 14:46:43 crc kubenswrapper[4820]: I0201 14:46:43.249847 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9glkc\" (UniqueName: \"kubernetes.io/projected/ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996-kube-api-access-9glkc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj\" (UID: \"ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj" Feb 01 14:46:43 crc kubenswrapper[4820]: I0201 14:46:43.257727 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj" Feb 01 14:46:43 crc kubenswrapper[4820]: I0201 14:46:43.762503 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj"] Feb 01 14:46:43 crc kubenswrapper[4820]: W0201 14:46:43.768501 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae5e3ab3_9f27_45e0_bbd9_6d9ee72a8996.slice/crio-c30a8ad38280fc01d44e65b5e64f05a06ba1299241706b3daf57096b1e59dbd0 WatchSource:0}: Error finding container c30a8ad38280fc01d44e65b5e64f05a06ba1299241706b3daf57096b1e59dbd0: Status 404 returned error can't find the container with id c30a8ad38280fc01d44e65b5e64f05a06ba1299241706b3daf57096b1e59dbd0 Feb 01 14:46:43 crc kubenswrapper[4820]: I0201 14:46:43.861587 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj" event={"ID":"ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996","Type":"ContainerStarted","Data":"c30a8ad38280fc01d44e65b5e64f05a06ba1299241706b3daf57096b1e59dbd0"} Feb 01 14:46:44 crc kubenswrapper[4820]: I0201 14:46:44.869798 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj" event={"ID":"ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996","Type":"ContainerStarted","Data":"3cab711350f57602690a3cae48aa2b9779003b95a050e8ed74cdd96e54650897"} Feb 01 14:46:44 crc kubenswrapper[4820]: I0201 14:46:44.894732 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj" podStartSLOduration=2.4768869110000002 podStartE2EDuration="2.894709638s" 
podCreationTimestamp="2026-02-01 14:46:42 +0000 UTC" firstStartedPulling="2026-02-01 14:46:43.771520001 +0000 UTC m=+1545.291886285" lastFinishedPulling="2026-02-01 14:46:44.189342728 +0000 UTC m=+1545.709709012" observedRunningTime="2026-02-01 14:46:44.883131134 +0000 UTC m=+1546.403497438" watchObservedRunningTime="2026-02-01 14:46:44.894709638 +0000 UTC m=+1546.415075942" Feb 01 14:46:48 crc kubenswrapper[4820]: I0201 14:46:48.900844 4820 generic.go:334] "Generic (PLEG): container finished" podID="ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996" containerID="3cab711350f57602690a3cae48aa2b9779003b95a050e8ed74cdd96e54650897" exitCode=0 Feb 01 14:46:48 crc kubenswrapper[4820]: I0201 14:46:48.900974 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj" event={"ID":"ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996","Type":"ContainerDied","Data":"3cab711350f57602690a3cae48aa2b9779003b95a050e8ed74cdd96e54650897"} Feb 01 14:46:49 crc kubenswrapper[4820]: I0201 14:46:49.243391 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:46:49 crc kubenswrapper[4820]: I0201 14:46:49.243452 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:46:49 crc kubenswrapper[4820]: I0201 14:46:49.243494 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 14:46:49 crc kubenswrapper[4820]: I0201 14:46:49.243974 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b"} pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 14:46:49 crc kubenswrapper[4820]: I0201 14:46:49.244027 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" containerID="cri-o://3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" gracePeriod=600 Feb 01 14:46:49 crc kubenswrapper[4820]: E0201 14:46:49.373245 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:46:49 crc kubenswrapper[4820]: I0201 14:46:49.915484 4820 generic.go:334] "Generic (PLEG): container finished" podID="060a9e0b-803f-4ccc-bed6-92614d449527" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" exitCode=0 Feb 01 14:46:49 crc kubenswrapper[4820]: I0201 
14:46:49.915551 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerDied","Data":"3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b"} Feb 01 14:46:49 crc kubenswrapper[4820]: I0201 14:46:49.915686 4820 scope.go:117] "RemoveContainer" containerID="2741853f96eda4464b2d41189acc694c25bdbd3ba9a0c898eedf42557ed1eae0" Feb 01 14:46:49 crc kubenswrapper[4820]: I0201 14:46:49.919147 4820 scope.go:117] "RemoveContainer" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:46:49 crc kubenswrapper[4820]: E0201 14:46:49.919978 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:46:50 crc kubenswrapper[4820]: I0201 14:46:50.404986 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj" Feb 01 14:46:50 crc kubenswrapper[4820]: I0201 14:46:50.565443 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996-ssh-key-openstack-edpm-ipam\") pod \"ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996\" (UID: \"ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996\") " Feb 01 14:46:50 crc kubenswrapper[4820]: I0201 14:46:50.565618 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9glkc\" (UniqueName: \"kubernetes.io/projected/ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996-kube-api-access-9glkc\") pod \"ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996\" (UID: \"ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996\") " Feb 01 14:46:50 crc kubenswrapper[4820]: I0201 14:46:50.565808 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996-inventory\") pod \"ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996\" (UID: \"ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996\") " Feb 01 14:46:50 crc kubenswrapper[4820]: I0201 14:46:50.572547 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996-kube-api-access-9glkc" (OuterVolumeSpecName: "kube-api-access-9glkc") pod "ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996" (UID: "ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996"). InnerVolumeSpecName "kube-api-access-9glkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:46:50 crc kubenswrapper[4820]: I0201 14:46:50.593921 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996" (UID: "ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:46:50 crc kubenswrapper[4820]: I0201 14:46:50.611126 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996-inventory" (OuterVolumeSpecName: "inventory") pod "ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996" (UID: "ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:46:50 crc kubenswrapper[4820]: I0201 14:46:50.668364 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9glkc\" (UniqueName: \"kubernetes.io/projected/ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996-kube-api-access-9glkc\") on node \"crc\" DevicePath \"\"" Feb 01 14:46:50 crc kubenswrapper[4820]: I0201 14:46:50.668419 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996-inventory\") on node \"crc\" DevicePath \"\"" Feb 01 14:46:50 crc kubenswrapper[4820]: I0201 14:46:50.668438 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 14:46:50 crc kubenswrapper[4820]: I0201 14:46:50.926614 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj" event={"ID":"ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996","Type":"ContainerDied","Data":"c30a8ad38280fc01d44e65b5e64f05a06ba1299241706b3daf57096b1e59dbd0"} Feb 01 14:46:50 crc kubenswrapper[4820]: I0201 14:46:50.926657 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c30a8ad38280fc01d44e65b5e64f05a06ba1299241706b3daf57096b1e59dbd0" Feb 01 14:46:50 crc kubenswrapper[4820]: I0201 14:46:50.926709 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj" Feb 01 14:46:50 crc kubenswrapper[4820]: I0201 14:46:50.985104 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-j9lg2"] Feb 01 14:46:50 crc kubenswrapper[4820]: E0201 14:46:50.986347 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 01 14:46:50 crc kubenswrapper[4820]: I0201 14:46:50.986388 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 01 14:46:50 crc kubenswrapper[4820]: I0201 14:46:50.986800 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 01 14:46:50 crc kubenswrapper[4820]: I0201 14:46:50.988061 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-j9lg2" Feb 01 14:46:50 crc kubenswrapper[4820]: I0201 14:46:50.992514 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 01 14:46:50 crc kubenswrapper[4820]: I0201 14:46:50.994958 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 01 14:46:50 crc kubenswrapper[4820]: I0201 14:46:50.995150 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw" Feb 01 14:46:51 crc kubenswrapper[4820]: I0201 14:46:51.001030 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 01 14:46:51 crc kubenswrapper[4820]: I0201 14:46:51.006110 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-j9lg2"] Feb 01 14:46:51 crc kubenswrapper[4820]: I0201 14:46:51.177331 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc44983a-1bae-4d8c-b36c-b74d3d390cc5-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-j9lg2\" (UID: \"cc44983a-1bae-4d8c-b36c-b74d3d390cc5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-j9lg2" Feb 01 14:46:51 crc kubenswrapper[4820]: I0201 14:46:51.177429 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc44983a-1bae-4d8c-b36c-b74d3d390cc5-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-j9lg2\" (UID: \"cc44983a-1bae-4d8c-b36c-b74d3d390cc5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-j9lg2" Feb 01 14:46:51 crc kubenswrapper[4820]: I0201 14:46:51.177468 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5kwb\" (UniqueName: \"kubernetes.io/projected/cc44983a-1bae-4d8c-b36c-b74d3d390cc5-kube-api-access-s5kwb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-j9lg2\" (UID: \"cc44983a-1bae-4d8c-b36c-b74d3d390cc5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-j9lg2" Feb 01 14:46:51 crc kubenswrapper[4820]: I0201 14:46:51.280086 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc44983a-1bae-4d8c-b36c-b74d3d390cc5-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-j9lg2\" (UID: \"cc44983a-1bae-4d8c-b36c-b74d3d390cc5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-j9lg2" Feb 01 14:46:51 crc kubenswrapper[4820]: I0201 14:46:51.280306 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc44983a-1bae-4d8c-b36c-b74d3d390cc5-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-j9lg2\" (UID: \"cc44983a-1bae-4d8c-b36c-b74d3d390cc5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-j9lg2" Feb 01 14:46:51 crc kubenswrapper[4820]: I0201 14:46:51.280383 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5kwb\" (UniqueName: \"kubernetes.io/projected/cc44983a-1bae-4d8c-b36c-b74d3d390cc5-kube-api-access-s5kwb\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-j9lg2\" (UID: \"cc44983a-1bae-4d8c-b36c-b74d3d390cc5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-j9lg2" Feb 01 14:46:51 crc kubenswrapper[4820]: I0201 14:46:51.285657 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc44983a-1bae-4d8c-b36c-b74d3d390cc5-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-j9lg2\" (UID: \"cc44983a-1bae-4d8c-b36c-b74d3d390cc5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-j9lg2" Feb 01 14:46:51 crc kubenswrapper[4820]: I0201 14:46:51.285704 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc44983a-1bae-4d8c-b36c-b74d3d390cc5-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-j9lg2\" (UID: \"cc44983a-1bae-4d8c-b36c-b74d3d390cc5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-j9lg2" Feb 01 14:46:51 crc kubenswrapper[4820]: I0201 14:46:51.297062 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5kwb\" (UniqueName: \"kubernetes.io/projected/cc44983a-1bae-4d8c-b36c-b74d3d390cc5-kube-api-access-s5kwb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-j9lg2\" (UID: \"cc44983a-1bae-4d8c-b36c-b74d3d390cc5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-j9lg2" Feb 01 14:46:51 crc kubenswrapper[4820]: I0201 14:46:51.325360 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-j9lg2" Feb 01 14:46:51 crc kubenswrapper[4820]: I0201 14:46:51.860614 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-j9lg2"] Feb 01 14:46:51 crc kubenswrapper[4820]: I0201 14:46:51.940112 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-j9lg2" event={"ID":"cc44983a-1bae-4d8c-b36c-b74d3d390cc5","Type":"ContainerStarted","Data":"02e26a823dce990a3da0ae71f9a5108a16096d09c64bd4d0deca703d9de37f4f"} Feb 01 14:46:52 crc kubenswrapper[4820]: I0201 14:46:52.951597 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-j9lg2" event={"ID":"cc44983a-1bae-4d8c-b36c-b74d3d390cc5","Type":"ContainerStarted","Data":"42d4a86a455f569ff05062c3cac3b4f313c72ec9704bf5a9127b71fce007bd08"} Feb 01 14:46:52 crc kubenswrapper[4820]: I0201 14:46:52.970547 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-j9lg2" podStartSLOduration=2.549271693 podStartE2EDuration="2.970530474s" podCreationTimestamp="2026-02-01 14:46:50 +0000 UTC" firstStartedPulling="2026-02-01 14:46:51.867670706 +0000 UTC m=+1553.388037010" lastFinishedPulling="2026-02-01 14:46:52.288929517 +0000 UTC m=+1553.809295791" observedRunningTime="2026-02-01 14:46:52.967957452 +0000 UTC m=+1554.488323776" watchObservedRunningTime="2026-02-01 14:46:52.970530474 +0000 UTC m=+1554.490896758" Feb 01 14:46:59 crc kubenswrapper[4820]: I0201 14:46:59.063651 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7cb0-account-create-update-cn5z4"] Feb 01 14:46:59 crc kubenswrapper[4820]: I0201 14:46:59.079589 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-h7kmt"] Feb 01 
14:46:59 crc kubenswrapper[4820]: I0201 14:46:59.088262 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-h7kmt"] Feb 01 14:46:59 crc kubenswrapper[4820]: I0201 14:46:59.096696 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7cb0-account-create-update-cn5z4"] Feb 01 14:46:59 crc kubenswrapper[4820]: I0201 14:46:59.233327 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2adaa104-f99c-44b1-b7de-d9cb4b660fa3" path="/var/lib/kubelet/pods/2adaa104-f99c-44b1-b7de-d9cb4b660fa3/volumes" Feb 01 14:46:59 crc kubenswrapper[4820]: I0201 14:46:59.234153 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="645c5b61-b15a-4734-a5ab-49c05d6046ec" path="/var/lib/kubelet/pods/645c5b61-b15a-4734-a5ab-49c05d6046ec/volumes" Feb 01 14:47:01 crc kubenswrapper[4820]: I0201 14:47:01.199225 4820 scope.go:117] "RemoveContainer" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:47:01 crc kubenswrapper[4820]: E0201 14:47:01.199884 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:47:03 crc kubenswrapper[4820]: I0201 14:47:03.025397 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6a2e-account-create-update-6zkcd"] Feb 01 14:47:03 crc kubenswrapper[4820]: I0201 14:47:03.037982 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-tlfvt"] Feb 01 14:47:03 crc kubenswrapper[4820]: I0201 14:47:03.045867 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-5nbk7"] Feb 01 14:47:03 crc kubenswrapper[4820]: I0201 14:47:03.054456 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6a2e-account-create-update-6zkcd"] Feb 01 14:47:03 crc kubenswrapper[4820]: I0201 14:47:03.061735 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-tlfvt"] Feb 01 14:47:03 crc kubenswrapper[4820]: I0201 14:47:03.068685 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-5nbk7"] Feb 01 14:47:03 crc kubenswrapper[4820]: I0201 14:47:03.207290 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ef0e812-f6c4-4147-9437-163981354ea2" path="/var/lib/kubelet/pods/0ef0e812-f6c4-4147-9437-163981354ea2/volumes" Feb 01 14:47:03 crc kubenswrapper[4820]: I0201 14:47:03.208074 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ece3bc3-7bab-4cd0-8875-1757a5f0b12b" path="/var/lib/kubelet/pods/9ece3bc3-7bab-4cd0-8875-1757a5f0b12b/volumes" Feb 01 14:47:03 crc kubenswrapper[4820]: I0201 14:47:03.208801 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a04eb126-50a3-44e6-9741-345441291daf" path="/var/lib/kubelet/pods/a04eb126-50a3-44e6-9741-345441291daf/volumes" Feb 01 14:47:04 crc kubenswrapper[4820]: I0201 14:47:04.024305 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d75a-account-create-update-xbb66"] Feb 01 14:47:04 crc kubenswrapper[4820]: I0201 14:47:04.033392 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/placement-d75a-account-create-update-xbb66"] Feb 01 14:47:05 crc kubenswrapper[4820]: I0201 14:47:05.211135 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="377152ee-4878-4651-b7b8-3b3612bae8aa" path="/var/lib/kubelet/pods/377152ee-4878-4651-b7b8-3b3612bae8aa/volumes" Feb 01 14:47:05 crc kubenswrapper[4820]: I0201 14:47:05.534693 4820 scope.go:117] "RemoveContainer" containerID="1c485e86d7d95a29723eaf1e539a0c40d6f9446e45011006a65dbf1107740cf4" Feb 01 14:47:05 crc kubenswrapper[4820]: I0201 14:47:05.560953 4820 scope.go:117] "RemoveContainer" containerID="da0352d44e4f9a6ddfb45c023114924718ae2ac92a19401fdd8548ed51e8ee7f" Feb 01 14:47:05 crc kubenswrapper[4820]: I0201 14:47:05.614095 4820 scope.go:117] "RemoveContainer" containerID="5e5271bdbdf6c39a476d3b11a54e3e70f92388f16c530762b9f6b5a97ca54581" Feb 01 14:47:05 crc kubenswrapper[4820]: I0201 14:47:05.643509 4820 scope.go:117] "RemoveContainer" containerID="a15949a1f0e4fa4a1f5d41f723c09121a502a96017e0c2bafb58d1a7e976b35f" Feb 01 14:47:05 crc kubenswrapper[4820]: I0201 14:47:05.702471 4820 scope.go:117] "RemoveContainer" containerID="4323276296c26d16988080cc1d89d9934aacc9a5c31e81c2ef5c77d007b60528" Feb 01 14:47:05 crc kubenswrapper[4820]: I0201 14:47:05.730119 4820 scope.go:117] "RemoveContainer" containerID="69ee4b575970bf44d5d120fa2ef8b20e277ab4ef2de30fff542301b2c5ca8686" Feb 01 14:47:07 crc kubenswrapper[4820]: I0201 14:47:07.260621 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j6snx"] Feb 01 14:47:07 crc kubenswrapper[4820]: I0201 14:47:07.262734 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j6snx" Feb 01 14:47:07 crc kubenswrapper[4820]: I0201 14:47:07.272159 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j6snx"] Feb 01 14:47:07 crc kubenswrapper[4820]: I0201 14:47:07.341188 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee110e8-fe34-46aa-802e-94b01cc088da-utilities\") pod \"community-operators-j6snx\" (UID: \"0ee110e8-fe34-46aa-802e-94b01cc088da\") " pod="openshift-marketplace/community-operators-j6snx" Feb 01 14:47:07 crc kubenswrapper[4820]: I0201 14:47:07.341473 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee110e8-fe34-46aa-802e-94b01cc088da-catalog-content\") pod \"community-operators-j6snx\" (UID: \"0ee110e8-fe34-46aa-802e-94b01cc088da\") " pod="openshift-marketplace/community-operators-j6snx" Feb 01 14:47:07 crc kubenswrapper[4820]: I0201 14:47:07.341519 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9w9t\" (UniqueName: \"kubernetes.io/projected/0ee110e8-fe34-46aa-802e-94b01cc088da-kube-api-access-d9w9t\") pod \"community-operators-j6snx\" (UID: \"0ee110e8-fe34-46aa-802e-94b01cc088da\") " pod="openshift-marketplace/community-operators-j6snx" Feb 01 14:47:07 crc kubenswrapper[4820]: I0201 14:47:07.442905 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee110e8-fe34-46aa-802e-94b01cc088da-utilities\") pod \"community-operators-j6snx\" (UID: \"0ee110e8-fe34-46aa-802e-94b01cc088da\") " pod="openshift-marketplace/community-operators-j6snx" 
Feb 01 14:47:07 crc kubenswrapper[4820]: I0201 14:47:07.443029 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee110e8-fe34-46aa-802e-94b01cc088da-catalog-content\") pod \"community-operators-j6snx\" (UID: \"0ee110e8-fe34-46aa-802e-94b01cc088da\") " pod="openshift-marketplace/community-operators-j6snx" Feb 01 14:47:07 crc kubenswrapper[4820]: I0201 14:47:07.443051 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9w9t\" (UniqueName: \"kubernetes.io/projected/0ee110e8-fe34-46aa-802e-94b01cc088da-kube-api-access-d9w9t\") pod \"community-operators-j6snx\" (UID: \"0ee110e8-fe34-46aa-802e-94b01cc088da\") " pod="openshift-marketplace/community-operators-j6snx" Feb 01 14:47:07 crc kubenswrapper[4820]: I0201 14:47:07.443730 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee110e8-fe34-46aa-802e-94b01cc088da-utilities\") pod \"community-operators-j6snx\" (UID: \"0ee110e8-fe34-46aa-802e-94b01cc088da\") " pod="openshift-marketplace/community-operators-j6snx" Feb 01 14:47:07 crc kubenswrapper[4820]: I0201 14:47:07.443965 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee110e8-fe34-46aa-802e-94b01cc088da-catalog-content\") pod \"community-operators-j6snx\" (UID: \"0ee110e8-fe34-46aa-802e-94b01cc088da\") " pod="openshift-marketplace/community-operators-j6snx" Feb 01 14:47:07 crc kubenswrapper[4820]: I0201 14:47:07.461481 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9w9t\" (UniqueName: \"kubernetes.io/projected/0ee110e8-fe34-46aa-802e-94b01cc088da-kube-api-access-d9w9t\") pod \"community-operators-j6snx\" (UID: \"0ee110e8-fe34-46aa-802e-94b01cc088da\") " pod="openshift-marketplace/community-operators-j6snx" Feb 01 14:47:07 crc kubenswrapper[4820]: I0201 14:47:07.590450 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j6snx" Feb 01 14:47:08 crc kubenswrapper[4820]: I0201 14:47:08.091676 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j6snx"] Feb 01 14:47:08 crc kubenswrapper[4820]: W0201 14:47:08.098181 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ee110e8_fe34_46aa_802e_94b01cc088da.slice/crio-ebafe680c68fbb372225e7ee0802dc34d2d0ca9de074345ce62f98d59d6d58ac WatchSource:0}: Error finding container ebafe680c68fbb372225e7ee0802dc34d2d0ca9de074345ce62f98d59d6d58ac: Status 404 returned error can't find the container with id ebafe680c68fbb372225e7ee0802dc34d2d0ca9de074345ce62f98d59d6d58ac Feb 01 14:47:09 crc kubenswrapper[4820]: I0201 14:47:09.093163 4820 generic.go:334] "Generic (PLEG): container finished" podID="0ee110e8-fe34-46aa-802e-94b01cc088da" containerID="891d9bcabc96ff0aa34deb13e10a8264fd10755d2256731493de63dfbd50eaf4" exitCode=0 Feb 01 14:47:09 crc kubenswrapper[4820]: I0201 14:47:09.093250 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6snx" event={"ID":"0ee110e8-fe34-46aa-802e-94b01cc088da","Type":"ContainerDied","Data":"891d9bcabc96ff0aa34deb13e10a8264fd10755d2256731493de63dfbd50eaf4"} Feb 01 14:47:09 crc kubenswrapper[4820]: I0201 14:47:09.093474 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6snx" event={"ID":"0ee110e8-fe34-46aa-802e-94b01cc088da","Type":"ContainerStarted","Data":"ebafe680c68fbb372225e7ee0802dc34d2d0ca9de074345ce62f98d59d6d58ac"} Feb 01 14:47:10 crc kubenswrapper[4820]: I0201 14:47:10.114617 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6snx" event={"ID":"0ee110e8-fe34-46aa-802e-94b01cc088da","Type":"ContainerStarted","Data":"3f61a930e22dbe1b566c3e85ba4b9d6d20c04832bd0f895ce26a6fd238b4010e"} Feb 01 14:47:11 crc kubenswrapper[4820]: I0201 14:47:11.126796 4820 generic.go:334] "Generic (PLEG): container finished" podID="0ee110e8-fe34-46aa-802e-94b01cc088da" containerID="3f61a930e22dbe1b566c3e85ba4b9d6d20c04832bd0f895ce26a6fd238b4010e" exitCode=0 Feb 01 14:47:11 crc kubenswrapper[4820]: I0201 14:47:11.126831 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6snx" event={"ID":"0ee110e8-fe34-46aa-802e-94b01cc088da","Type":"ContainerDied","Data":"3f61a930e22dbe1b566c3e85ba4b9d6d20c04832bd0f895ce26a6fd238b4010e"} Feb 01 14:47:11 crc kubenswrapper[4820]: I0201 14:47:11.127158 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6snx" event={"ID":"0ee110e8-fe34-46aa-802e-94b01cc088da","Type":"ContainerStarted","Data":"42b469882ce801b310bf8396e0056159fb8c974a6e15daf0f52df3c306efa148"} Feb 01 14:47:11 crc kubenswrapper[4820]: I0201 14:47:11.152028 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j6snx" podStartSLOduration=2.769340364 podStartE2EDuration="4.152009839s" podCreationTimestamp="2026-02-01 14:47:07 +0000 UTC" firstStartedPulling="2026-02-01 14:47:09.097351601 +0000 UTC m=+1570.617717895" lastFinishedPulling="2026-02-01 14:47:10.480021086 +0000 UTC m=+1572.000387370" observedRunningTime="2026-02-01 14:47:11.146821121 +0000 UTC m=+1572.667187405" watchObservedRunningTime="2026-02-01 14:47:11.152009839 +0000 UTC m=+1572.672376123" 
Feb 01 14:47:12 crc kubenswrapper[4820]: I0201 14:47:12.198836 4820 scope.go:117] "RemoveContainer" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:47:12 crc kubenswrapper[4820]: E0201 14:47:12.199448 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:47:17 crc kubenswrapper[4820]: I0201 14:47:17.071187 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-ndtnv"] Feb 01 14:47:17 crc kubenswrapper[4820]: I0201 14:47:17.085492 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-ndtnv"] Feb 01 14:47:17 crc kubenswrapper[4820]: I0201 14:47:17.218309 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7610cca2-7bcc-4b2d-8c5f-df32dee24ff6" path="/var/lib/kubelet/pods/7610cca2-7bcc-4b2d-8c5f-df32dee24ff6/volumes" Feb 01 14:47:17 crc kubenswrapper[4820]: I0201 14:47:17.591176 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j6snx" Feb 01 14:47:17 crc kubenswrapper[4820]: I0201 14:47:17.591263 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j6snx" Feb 01 14:47:17 crc kubenswrapper[4820]: I0201 14:47:17.650209 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j6snx" Feb 01 14:47:18 crc kubenswrapper[4820]: I0201 14:47:18.256445 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j6snx" Feb 01 14:47:18 crc kubenswrapper[4820]: I0201 14:47:18.325790 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j6snx"] Feb 01 14:47:20 crc kubenswrapper[4820]: I0201 14:47:20.219390 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j6snx" podUID="0ee110e8-fe34-46aa-802e-94b01cc088da" containerName="registry-server" containerID="cri-o://42b469882ce801b310bf8396e0056159fb8c974a6e15daf0f52df3c306efa148" gracePeriod=2 Feb 01 14:47:21 crc kubenswrapper[4820]: I0201 14:47:21.171119 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j6snx" Feb 01 14:47:21 crc kubenswrapper[4820]: I0201 14:47:21.231216 4820 generic.go:334] "Generic (PLEG): container finished" podID="0ee110e8-fe34-46aa-802e-94b01cc088da" containerID="42b469882ce801b310bf8396e0056159fb8c974a6e15daf0f52df3c306efa148" exitCode=0 Feb 01 14:47:21 crc kubenswrapper[4820]: I0201 14:47:21.231282 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6snx" event={"ID":"0ee110e8-fe34-46aa-802e-94b01cc088da","Type":"ContainerDied","Data":"42b469882ce801b310bf8396e0056159fb8c974a6e15daf0f52df3c306efa148"} Feb 01 14:47:21 crc kubenswrapper[4820]: I0201 14:47:21.231327 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6snx" event={"ID":"0ee110e8-fe34-46aa-802e-94b01cc088da","Type":"ContainerDied","Data":"ebafe680c68fbb372225e7ee0802dc34d2d0ca9de074345ce62f98d59d6d58ac"} Feb 01 14:47:21 crc kubenswrapper[4820]: I0201 14:47:21.231356 4820 scope.go:117] "RemoveContainer" containerID="42b469882ce801b310bf8396e0056159fb8c974a6e15daf0f52df3c306efa148" Feb 01 14:47:21 crc kubenswrapper[4820]: I0201 14:47:21.231544 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j6snx" Feb 01 14:47:21 crc kubenswrapper[4820]: I0201 14:47:21.236077 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9w9t\" (UniqueName: \"kubernetes.io/projected/0ee110e8-fe34-46aa-802e-94b01cc088da-kube-api-access-d9w9t\") pod \"0ee110e8-fe34-46aa-802e-94b01cc088da\" (UID: \"0ee110e8-fe34-46aa-802e-94b01cc088da\") " Feb 01 14:47:21 crc kubenswrapper[4820]: I0201 14:47:21.236509 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee110e8-fe34-46aa-802e-94b01cc088da-utilities\") pod \"0ee110e8-fe34-46aa-802e-94b01cc088da\" (UID: \"0ee110e8-fe34-46aa-802e-94b01cc088da\") " Feb 01 14:47:21 crc kubenswrapper[4820]: I0201 14:47:21.236593 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee110e8-fe34-46aa-802e-94b01cc088da-catalog-content\") pod \"0ee110e8-fe34-46aa-802e-94b01cc088da\" (UID: \"0ee110e8-fe34-46aa-802e-94b01cc088da\") " Feb 01 14:47:21 crc kubenswrapper[4820]: I0201 14:47:21.237868 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ee110e8-fe34-46aa-802e-94b01cc088da-utilities" (OuterVolumeSpecName: "utilities") pod "0ee110e8-fe34-46aa-802e-94b01cc088da" (UID: "0ee110e8-fe34-46aa-802e-94b01cc088da"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:47:21 crc kubenswrapper[4820]: I0201 14:47:21.243090 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ee110e8-fe34-46aa-802e-94b01cc088da-kube-api-access-d9w9t" (OuterVolumeSpecName: "kube-api-access-d9w9t") pod "0ee110e8-fe34-46aa-802e-94b01cc088da" (UID: "0ee110e8-fe34-46aa-802e-94b01cc088da"). InnerVolumeSpecName "kube-api-access-d9w9t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:47:21 crc kubenswrapper[4820]: I0201 14:47:21.261638 4820 scope.go:117] "RemoveContainer" containerID="3f61a930e22dbe1b566c3e85ba4b9d6d20c04832bd0f895ce26a6fd238b4010e" Feb 01 14:47:21 crc kubenswrapper[4820]: I0201 14:47:21.290827 4820 scope.go:117] "RemoveContainer" containerID="891d9bcabc96ff0aa34deb13e10a8264fd10755d2256731493de63dfbd50eaf4" Feb 01 14:47:21 crc kubenswrapper[4820]: I0201 14:47:21.296555 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ee110e8-fe34-46aa-802e-94b01cc088da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ee110e8-fe34-46aa-802e-94b01cc088da" (UID: "0ee110e8-fe34-46aa-802e-94b01cc088da"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:47:21 crc kubenswrapper[4820]: I0201 14:47:21.331183 4820 scope.go:117] "RemoveContainer" containerID="42b469882ce801b310bf8396e0056159fb8c974a6e15daf0f52df3c306efa148" Feb 01 14:47:21 crc kubenswrapper[4820]: E0201 14:47:21.332431 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42b469882ce801b310bf8396e0056159fb8c974a6e15daf0f52df3c306efa148\": container with ID starting with 42b469882ce801b310bf8396e0056159fb8c974a6e15daf0f52df3c306efa148 not found: ID does not exist" containerID="42b469882ce801b310bf8396e0056159fb8c974a6e15daf0f52df3c306efa148" Feb 01 14:47:21 crc kubenswrapper[4820]: I0201 14:47:21.332468 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42b469882ce801b310bf8396e0056159fb8c974a6e15daf0f52df3c306efa148"} err="failed to get container status \"42b469882ce801b310bf8396e0056159fb8c974a6e15daf0f52df3c306efa148\": rpc error: code = NotFound desc = could not find container \"42b469882ce801b310bf8396e0056159fb8c974a6e15daf0f52df3c306efa148\": container with ID starting with 42b469882ce801b310bf8396e0056159fb8c974a6e15daf0f52df3c306efa148 not found: ID does not exist" Feb 01 14:47:21 crc kubenswrapper[4820]: I0201 14:47:21.332490 4820 scope.go:117] "RemoveContainer" containerID="3f61a930e22dbe1b566c3e85ba4b9d6d20c04832bd0f895ce26a6fd238b4010e" Feb 01 14:47:21 crc kubenswrapper[4820]: E0201 14:47:21.332958 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f61a930e22dbe1b566c3e85ba4b9d6d20c04832bd0f895ce26a6fd238b4010e\": container with ID starting with 3f61a930e22dbe1b566c3e85ba4b9d6d20c04832bd0f895ce26a6fd238b4010e not found: ID does not exist" containerID="3f61a930e22dbe1b566c3e85ba4b9d6d20c04832bd0f895ce26a6fd238b4010e" Feb 01 14:47:21 crc kubenswrapper[4820]: I0201 14:47:21.332980 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f61a930e22dbe1b566c3e85ba4b9d6d20c04832bd0f895ce26a6fd238b4010e"} err="failed to get container status \"3f61a930e22dbe1b566c3e85ba4b9d6d20c04832bd0f895ce26a6fd238b4010e\": rpc error: code = NotFound desc = could not find container \"3f61a930e22dbe1b566c3e85ba4b9d6d20c04832bd0f895ce26a6fd238b4010e\": container with ID starting with 3f61a930e22dbe1b566c3e85ba4b9d6d20c04832bd0f895ce26a6fd238b4010e not found: ID does not exist" Feb 01 14:47:21 crc kubenswrapper[4820]: I0201 14:47:21.332992 4820 scope.go:117] "RemoveContainer" containerID="891d9bcabc96ff0aa34deb13e10a8264fd10755d2256731493de63dfbd50eaf4" Feb 01 14:47:21 crc kubenswrapper[4820]: 
E0201 14:47:21.333513 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"891d9bcabc96ff0aa34deb13e10a8264fd10755d2256731493de63dfbd50eaf4\": container with ID starting with 891d9bcabc96ff0aa34deb13e10a8264fd10755d2256731493de63dfbd50eaf4 not found: ID does not exist" containerID="891d9bcabc96ff0aa34deb13e10a8264fd10755d2256731493de63dfbd50eaf4" Feb 01 14:47:21 crc kubenswrapper[4820]: I0201 14:47:21.333562 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"891d9bcabc96ff0aa34deb13e10a8264fd10755d2256731493de63dfbd50eaf4"} err="failed to get container status \"891d9bcabc96ff0aa34deb13e10a8264fd10755d2256731493de63dfbd50eaf4\": rpc error: code = NotFound desc = could not find container \"891d9bcabc96ff0aa34deb13e10a8264fd10755d2256731493de63dfbd50eaf4\": container with ID starting with 891d9bcabc96ff0aa34deb13e10a8264fd10755d2256731493de63dfbd50eaf4 not found: ID does not exist" Feb 01 14:47:21 crc kubenswrapper[4820]: I0201 14:47:21.338795 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9w9t\" (UniqueName: \"kubernetes.io/projected/0ee110e8-fe34-46aa-802e-94b01cc088da-kube-api-access-d9w9t\") on node \"crc\" DevicePath \"\"" Feb 01 14:47:21 crc kubenswrapper[4820]: I0201 14:47:21.338945 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee110e8-fe34-46aa-802e-94b01cc088da-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 14:47:21 crc kubenswrapper[4820]: I0201 14:47:21.339032 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee110e8-fe34-46aa-802e-94b01cc088da-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 14:47:21 crc kubenswrapper[4820]: I0201 14:47:21.560557 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j6snx"] Feb 01 14:47:21 crc kubenswrapper[4820]: I0201 14:47:21.568246 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j6snx"] Feb 01 14:47:23 crc kubenswrapper[4820]: I0201 14:47:23.214473 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ee110e8-fe34-46aa-802e-94b01cc088da" path="/var/lib/kubelet/pods/0ee110e8-fe34-46aa-802e-94b01cc088da/volumes" Feb 01 14:47:25 crc kubenswrapper[4820]: I0201 14:47:25.024015 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-ww8ln"] Feb 01 14:47:25 crc kubenswrapper[4820]: I0201 14:47:25.030718 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-ww8ln"] Feb 01 14:47:25 crc kubenswrapper[4820]: I0201 14:47:25.198922 4820 scope.go:117] "RemoveContainer" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:47:25 crc kubenswrapper[4820]: E0201 14:47:25.199151 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:47:25 crc kubenswrapper[4820]: I0201 14:47:25.207132 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="de2181c8-3297-4f4a-b21a-51a8d2172a9c" path="/var/lib/kubelet/pods/de2181c8-3297-4f4a-b21a-51a8d2172a9c/volumes" Feb 01 14:47:27 crc kubenswrapper[4820]: I0201 14:47:27.288349 4820 generic.go:334] "Generic (PLEG): container finished" podID="cc44983a-1bae-4d8c-b36c-b74d3d390cc5" containerID="42d4a86a455f569ff05062c3cac3b4f313c72ec9704bf5a9127b71fce007bd08" exitCode=0 Feb 01 14:47:27 crc kubenswrapper[4820]: I0201 14:47:27.288444 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-j9lg2" event={"ID":"cc44983a-1bae-4d8c-b36c-b74d3d390cc5","Type":"ContainerDied","Data":"42d4a86a455f569ff05062c3cac3b4f313c72ec9704bf5a9127b71fce007bd08"} Feb 01 14:47:28 crc kubenswrapper[4820]: I0201 14:47:28.713193 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-j9lg2" Feb 01 14:47:28 crc kubenswrapper[4820]: I0201 14:47:28.812747 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5kwb\" (UniqueName: \"kubernetes.io/projected/cc44983a-1bae-4d8c-b36c-b74d3d390cc5-kube-api-access-s5kwb\") pod \"cc44983a-1bae-4d8c-b36c-b74d3d390cc5\" (UID: \"cc44983a-1bae-4d8c-b36c-b74d3d390cc5\") " Feb 01 14:47:28 crc kubenswrapper[4820]: I0201 14:47:28.812873 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc44983a-1bae-4d8c-b36c-b74d3d390cc5-inventory\") pod \"cc44983a-1bae-4d8c-b36c-b74d3d390cc5\" (UID: \"cc44983a-1bae-4d8c-b36c-b74d3d390cc5\") " Feb 01 14:47:28 crc kubenswrapper[4820]: I0201 14:47:28.813045 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc44983a-1bae-4d8c-b36c-b74d3d390cc5-ssh-key-openstack-edpm-ipam\") pod \"cc44983a-1bae-4d8c-b36c-b74d3d390cc5\" (UID: \"cc44983a-1bae-4d8c-b36c-b74d3d390cc5\") " Feb 01 14:47:28 crc kubenswrapper[4820]: I0201 14:47:28.833025 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc44983a-1bae-4d8c-b36c-b74d3d390cc5-kube-api-access-s5kwb" (OuterVolumeSpecName: "kube-api-access-s5kwb") pod "cc44983a-1bae-4d8c-b36c-b74d3d390cc5" (UID: "cc44983a-1bae-4d8c-b36c-b74d3d390cc5"). InnerVolumeSpecName "kube-api-access-s5kwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:47:28 crc kubenswrapper[4820]: I0201 14:47:28.845627 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc44983a-1bae-4d8c-b36c-b74d3d390cc5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cc44983a-1bae-4d8c-b36c-b74d3d390cc5" (UID: "cc44983a-1bae-4d8c-b36c-b74d3d390cc5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:47:28 crc kubenswrapper[4820]: I0201 14:47:28.851218 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc44983a-1bae-4d8c-b36c-b74d3d390cc5-inventory" (OuterVolumeSpecName: "inventory") pod "cc44983a-1bae-4d8c-b36c-b74d3d390cc5" (UID: "cc44983a-1bae-4d8c-b36c-b74d3d390cc5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:47:28 crc kubenswrapper[4820]: I0201 14:47:28.916337 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc44983a-1bae-4d8c-b36c-b74d3d390cc5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 14:47:28 crc kubenswrapper[4820]: I0201 14:47:28.916371 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5kwb\" (UniqueName: \"kubernetes.io/projected/cc44983a-1bae-4d8c-b36c-b74d3d390cc5-kube-api-access-s5kwb\") on node \"crc\" DevicePath \"\"" Feb 01 14:47:28 crc kubenswrapper[4820]: I0201 14:47:28.916381 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc44983a-1bae-4d8c-b36c-b74d3d390cc5-inventory\") on node \"crc\" DevicePath \"\"" Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.307056 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-j9lg2" event={"ID":"cc44983a-1bae-4d8c-b36c-b74d3d390cc5","Type":"ContainerDied","Data":"02e26a823dce990a3da0ae71f9a5108a16096d09c64bd4d0deca703d9de37f4f"} Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.307105 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02e26a823dce990a3da0ae71f9a5108a16096d09c64bd4d0deca703d9de37f4f" Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.307106 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-j9lg2" Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.388674 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw"] Feb 01 14:47:29 crc kubenswrapper[4820]: E0201 14:47:29.389073 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee110e8-fe34-46aa-802e-94b01cc088da" containerName="registry-server" Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.389094 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee110e8-fe34-46aa-802e-94b01cc088da" containerName="registry-server" Feb 01 14:47:29 crc kubenswrapper[4820]: E0201 14:47:29.389122 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee110e8-fe34-46aa-802e-94b01cc088da" containerName="extract-content" Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.389130 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee110e8-fe34-46aa-802e-94b01cc088da" containerName="extract-content" Feb 01 14:47:29 crc kubenswrapper[4820]: E0201 14:47:29.389154 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc44983a-1bae-4d8c-b36c-b74d3d390cc5" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.389165 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc44983a-1bae-4d8c-b36c-b74d3d390cc5" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 01 14:47:29 crc kubenswrapper[4820]: E0201 14:47:29.389178 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee110e8-fe34-46aa-802e-94b01cc088da" containerName="extract-utilities" Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.389186 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee110e8-fe34-46aa-802e-94b01cc088da" containerName="extract-utilities" Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.389410 
4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc44983a-1bae-4d8c-b36c-b74d3d390cc5" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.389427 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ee110e8-fe34-46aa-802e-94b01cc088da" containerName="registry-server" Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.390047 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw" Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.392855 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.392988 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.393068 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw" Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.393159 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.402565 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw"] Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.530682 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71397bae-0f2c-4738-99b7-7bc158ab235b-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw\" (UID: \"71397bae-0f2c-4738-99b7-7bc158ab235b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw" Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.531600 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71397bae-0f2c-4738-99b7-7bc158ab235b-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw\" (UID: \"71397bae-0f2c-4738-99b7-7bc158ab235b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw" Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.531677 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gthkh\" (UniqueName: \"kubernetes.io/projected/71397bae-0f2c-4738-99b7-7bc158ab235b-kube-api-access-gthkh\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw\" (UID: \"71397bae-0f2c-4738-99b7-7bc158ab235b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw" Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.633650 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71397bae-0f2c-4738-99b7-7bc158ab235b-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw\" (UID: \"71397bae-0f2c-4738-99b7-7bc158ab235b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw" Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.633707 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gthkh\" (UniqueName: 
\"kubernetes.io/projected/71397bae-0f2c-4738-99b7-7bc158ab235b-kube-api-access-gthkh\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw\" (UID: \"71397bae-0f2c-4738-99b7-7bc158ab235b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw" Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.633808 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71397bae-0f2c-4738-99b7-7bc158ab235b-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw\" (UID: \"71397bae-0f2c-4738-99b7-7bc158ab235b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw" Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.640624 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71397bae-0f2c-4738-99b7-7bc158ab235b-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw\" (UID: \"71397bae-0f2c-4738-99b7-7bc158ab235b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw" Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.640624 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71397bae-0f2c-4738-99b7-7bc158ab235b-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw\" (UID: \"71397bae-0f2c-4738-99b7-7bc158ab235b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw" Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.654975 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gthkh\" (UniqueName: \"kubernetes.io/projected/71397bae-0f2c-4738-99b7-7bc158ab235b-kube-api-access-gthkh\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw\" (UID: \"71397bae-0f2c-4738-99b7-7bc158ab235b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw" Feb 01 14:47:29 crc kubenswrapper[4820]: I0201 14:47:29.707263 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw" Feb 01 14:47:30 crc kubenswrapper[4820]: I0201 14:47:30.036605 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-xl7k9"] Feb 01 14:47:30 crc kubenswrapper[4820]: I0201 14:47:30.044929 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-xl7k9"] Feb 01 14:47:30 crc kubenswrapper[4820]: I0201 14:47:30.219578 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw"] Feb 01 14:47:30 crc kubenswrapper[4820]: I0201 14:47:30.221746 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 01 14:47:30 crc kubenswrapper[4820]: I0201 14:47:30.316756 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw" event={"ID":"71397bae-0f2c-4738-99b7-7bc158ab235b","Type":"ContainerStarted","Data":"1a4e5df1ce655bc3113cfd1f1ff8d86834b3471e6d523fce7d780107f22bf924"} Feb 01 14:47:31 crc kubenswrapper[4820]: I0201 14:47:31.207564 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45963d5-f579-41a2-81d5-399be2d3ff53" path="/var/lib/kubelet/pods/d45963d5-f579-41a2-81d5-399be2d3ff53/volumes" Feb 01 14:47:31 crc kubenswrapper[4820]: I0201 14:47:31.326305 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw" event={"ID":"71397bae-0f2c-4738-99b7-7bc158ab235b","Type":"ContainerStarted","Data":"cd07d436baa9415ebd2ab1069512cb325b16d8c63101a15f838aec65b37b6bac"} Feb 01 14:47:31 crc kubenswrapper[4820]: I0201 14:47:31.352762 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw" podStartSLOduration=1.9573450650000002 podStartE2EDuration="2.352744401s" podCreationTimestamp="2026-02-01 14:47:29 +0000 UTC" firstStartedPulling="2026-02-01 14:47:30.221489657 +0000 UTC m=+1591.741855941" lastFinishedPulling="2026-02-01 14:47:30.616888993 +0000 UTC m=+1592.137255277" observedRunningTime="2026-02-01 14:47:31.344014467 +0000 UTC m=+1592.864380751" watchObservedRunningTime="2026-02-01 14:47:31.352744401 +0000 UTC m=+1592.873110685" Feb 01 14:47:34 crc kubenswrapper[4820]: I0201 14:47:34.029955 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-bdnlv"] Feb 01 14:47:34 crc kubenswrapper[4820]: I0201 14:47:34.044108 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2f44-account-create-update-fsm6g"] Feb 01 14:47:34 crc kubenswrapper[4820]: I0201 14:47:34.052821 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b3a6-account-create-update-t7bv2"] Feb 01 14:47:34 crc kubenswrapper[4820]: I0201 14:47:34.061651 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-bdnlv"] Feb 01 14:47:34 crc kubenswrapper[4820]: I0201 14:47:34.068968 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-2f44-account-create-update-fsm6g"] Feb 01 14:47:34 crc kubenswrapper[4820]: I0201 14:47:34.075290 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b3a6-account-create-update-t7bv2"] Feb 01 14:47:34 crc kubenswrapper[4820]: I0201 14:47:34.081750 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-dda1-account-create-update-lfbkq"] Feb 01 14:47:34 crc kubenswrapper[4820]: I0201 14:47:34.087904 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-xzjhl"] Feb 01 14:47:34 crc kubenswrapper[4820]: I0201 14:47:34.093960 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-xzjhl"] Feb 01 14:47:34 crc kubenswrapper[4820]: I0201 14:47:34.102190 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-dda1-account-create-update-lfbkq"] Feb 01 14:47:35 crc kubenswrapper[4820]: I0201 14:47:35.210273 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5217c856-fc01-4221-a551-63e793f60558" path="/var/lib/kubelet/pods/5217c856-fc01-4221-a551-63e793f60558/volumes" Feb 01 14:47:35 crc kubenswrapper[4820]: I0201 14:47:35.211109 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="905d71d8-55a5-42a9-a22c-bfc3c3eb3e19" path="/var/lib/kubelet/pods/905d71d8-55a5-42a9-a22c-bfc3c3eb3e19/volumes" Feb 01 14:47:35 crc kubenswrapper[4820]: I0201 14:47:35.211762 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b35ffca4-b990-4cf7-ac82-57b7f200ef24" path="/var/lib/kubelet/pods/b35ffca4-b990-4cf7-ac82-57b7f200ef24/volumes" Feb 01 14:47:35 crc kubenswrapper[4820]: I0201 14:47:35.212639 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db77c675-16f7-46c6-bb7a-6aa18e492772" path="/var/lib/kubelet/pods/db77c675-16f7-46c6-bb7a-6aa18e492772/volumes" Feb 01 14:47:35 crc kubenswrapper[4820]: I0201 14:47:35.213743 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa83c1db-779c-4e77-8def-c9acf6560e6f" path="/var/lib/kubelet/pods/fa83c1db-779c-4e77-8def-c9acf6560e6f/volumes" Feb 01 14:47:35 crc kubenswrapper[4820]: I0201 14:47:35.355794 4820 generic.go:334] "Generic (PLEG): container finished" podID="71397bae-0f2c-4738-99b7-7bc158ab235b" containerID="cd07d436baa9415ebd2ab1069512cb325b16d8c63101a15f838aec65b37b6bac" exitCode=0 Feb 01 14:47:35 crc kubenswrapper[4820]: I0201 14:47:35.355832 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw" event={"ID":"71397bae-0f2c-4738-99b7-7bc158ab235b","Type":"ContainerDied","Data":"cd07d436baa9415ebd2ab1069512cb325b16d8c63101a15f838aec65b37b6bac"} Feb 01 14:47:36 crc kubenswrapper[4820]: I0201 14:47:36.768418 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw" Feb 01 14:47:36 crc kubenswrapper[4820]: I0201 14:47:36.854332 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71397bae-0f2c-4738-99b7-7bc158ab235b-ssh-key-openstack-edpm-ipam\") pod \"71397bae-0f2c-4738-99b7-7bc158ab235b\" (UID: \"71397bae-0f2c-4738-99b7-7bc158ab235b\") " Feb 01 14:47:36 crc kubenswrapper[4820]: I0201 14:47:36.854565 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71397bae-0f2c-4738-99b7-7bc158ab235b-inventory\") pod \"71397bae-0f2c-4738-99b7-7bc158ab235b\" (UID: \"71397bae-0f2c-4738-99b7-7bc158ab235b\") " Feb 01 14:47:36 crc kubenswrapper[4820]: I0201 14:47:36.854597 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gthkh\" (UniqueName: \"kubernetes.io/projected/71397bae-0f2c-4738-99b7-7bc158ab235b-kube-api-access-gthkh\") pod \"71397bae-0f2c-4738-99b7-7bc158ab235b\" (UID: \"71397bae-0f2c-4738-99b7-7bc158ab235b\") " Feb 01 14:47:36 crc kubenswrapper[4820]: I0201 14:47:36.859483 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71397bae-0f2c-4738-99b7-7bc158ab235b-kube-api-access-gthkh" (OuterVolumeSpecName: "kube-api-access-gthkh") pod "71397bae-0f2c-4738-99b7-7bc158ab235b" (UID: "71397bae-0f2c-4738-99b7-7bc158ab235b"). InnerVolumeSpecName "kube-api-access-gthkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:47:36 crc kubenswrapper[4820]: I0201 14:47:36.879653 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71397bae-0f2c-4738-99b7-7bc158ab235b-inventory" (OuterVolumeSpecName: "inventory") pod "71397bae-0f2c-4738-99b7-7bc158ab235b" (UID: "71397bae-0f2c-4738-99b7-7bc158ab235b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:47:36 crc kubenswrapper[4820]: I0201 14:47:36.880122 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71397bae-0f2c-4738-99b7-7bc158ab235b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "71397bae-0f2c-4738-99b7-7bc158ab235b" (UID: "71397bae-0f2c-4738-99b7-7bc158ab235b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:47:36 crc kubenswrapper[4820]: I0201 14:47:36.957116 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71397bae-0f2c-4738-99b7-7bc158ab235b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 14:47:36 crc kubenswrapper[4820]: I0201 14:47:36.957153 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71397bae-0f2c-4738-99b7-7bc158ab235b-inventory\") on node \"crc\" DevicePath \"\"" Feb 01 14:47:36 crc kubenswrapper[4820]: I0201 14:47:36.957162 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gthkh\" (UniqueName: \"kubernetes.io/projected/71397bae-0f2c-4738-99b7-7bc158ab235b-kube-api-access-gthkh\") on node \"crc\" DevicePath \"\"" Feb 01 14:47:37 crc kubenswrapper[4820]: I0201 14:47:37.374261 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw" event={"ID":"71397bae-0f2c-4738-99b7-7bc158ab235b","Type":"ContainerDied","Data":"1a4e5df1ce655bc3113cfd1f1ff8d86834b3471e6d523fce7d780107f22bf924"} Feb 01 14:47:37 crc kubenswrapper[4820]: I0201 14:47:37.374310 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a4e5df1ce655bc3113cfd1f1ff8d86834b3471e6d523fce7d780107f22bf924" Feb 01 14:47:37 crc kubenswrapper[4820]: I0201 14:47:37.374316 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw" Feb 01 14:47:37 crc kubenswrapper[4820]: I0201 14:47:37.442477 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf"] Feb 01 14:47:37 crc kubenswrapper[4820]: E0201 14:47:37.443305 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71397bae-0f2c-4738-99b7-7bc158ab235b" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 01 14:47:37 crc kubenswrapper[4820]: I0201 14:47:37.443332 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="71397bae-0f2c-4738-99b7-7bc158ab235b" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 01 14:47:37 crc kubenswrapper[4820]: I0201 14:47:37.443583 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="71397bae-0f2c-4738-99b7-7bc158ab235b" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 01 14:47:37 crc kubenswrapper[4820]: I0201 14:47:37.444303 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf" Feb 01 14:47:37 crc kubenswrapper[4820]: I0201 14:47:37.451514 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 01 14:47:37 crc kubenswrapper[4820]: I0201 14:47:37.451516 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 01 14:47:37 crc kubenswrapper[4820]: I0201 14:47:37.451663 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 01 14:47:37 crc kubenswrapper[4820]: I0201 14:47:37.451728 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw" Feb 01 14:47:37 crc kubenswrapper[4820]: I0201 14:47:37.456752 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf"] Feb 01 14:47:37 crc kubenswrapper[4820]: I0201 14:47:37.568987 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68589532-7a9c-4722-97b0-4fac8e42fac7-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf\" (UID: \"68589532-7a9c-4722-97b0-4fac8e42fac7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf" Feb 01 14:47:37 crc kubenswrapper[4820]: I0201 14:47:37.569355 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68589532-7a9c-4722-97b0-4fac8e42fac7-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf\" (UID: \"68589532-7a9c-4722-97b0-4fac8e42fac7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf" Feb 01 14:47:37 crc kubenswrapper[4820]: I0201 14:47:37.569638 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mltrb\" (UniqueName: \"kubernetes.io/projected/68589532-7a9c-4722-97b0-4fac8e42fac7-kube-api-access-mltrb\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf\" (UID: \"68589532-7a9c-4722-97b0-4fac8e42fac7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf" Feb 01 14:47:37 crc kubenswrapper[4820]: I0201 14:47:37.672082 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68589532-7a9c-4722-97b0-4fac8e42fac7-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf\" (UID: \"68589532-7a9c-4722-97b0-4fac8e42fac7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf" Feb 01 14:47:37 crc kubenswrapper[4820]: I0201 14:47:37.672204 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mltrb\" (UniqueName: \"kubernetes.io/projected/68589532-7a9c-4722-97b0-4fac8e42fac7-kube-api-access-mltrb\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf\" (UID: \"68589532-7a9c-4722-97b0-4fac8e42fac7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf" Feb 01 14:47:37 crc kubenswrapper[4820]: I0201 14:47:37.672419 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/68589532-7a9c-4722-97b0-4fac8e42fac7-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf\" (UID: \"68589532-7a9c-4722-97b0-4fac8e42fac7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf" Feb 01 14:47:37 crc kubenswrapper[4820]: I0201 14:47:37.676295 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68589532-7a9c-4722-97b0-4fac8e42fac7-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf\" (UID: \"68589532-7a9c-4722-97b0-4fac8e42fac7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf" Feb 01 14:47:37 crc kubenswrapper[4820]: I0201 14:47:37.676326 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68589532-7a9c-4722-97b0-4fac8e42fac7-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf\" (UID: \"68589532-7a9c-4722-97b0-4fac8e42fac7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf" Feb 01 14:47:37 crc kubenswrapper[4820]: I0201 14:47:37.698891 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mltrb\" (UniqueName: \"kubernetes.io/projected/68589532-7a9c-4722-97b0-4fac8e42fac7-kube-api-access-mltrb\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf\" (UID: \"68589532-7a9c-4722-97b0-4fac8e42fac7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf" Feb 01 14:47:37 crc kubenswrapper[4820]: I0201 14:47:37.769464 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf" Feb 01 14:47:38 crc kubenswrapper[4820]: I0201 14:47:38.286891 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf"] Feb 01 14:47:38 crc kubenswrapper[4820]: I0201 14:47:38.383229 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf" event={"ID":"68589532-7a9c-4722-97b0-4fac8e42fac7","Type":"ContainerStarted","Data":"354189a68b50e63dc5f571684f277cffce51664c3480699e7d1c503dc502c1ce"} Feb 01 14:47:39 crc kubenswrapper[4820]: I0201 14:47:39.206481 4820 scope.go:117] "RemoveContainer" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:47:39 crc kubenswrapper[4820]: E0201 14:47:39.207278 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:47:39 crc kubenswrapper[4820]: I0201 14:47:39.393866 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf" event={"ID":"68589532-7a9c-4722-97b0-4fac8e42fac7","Type":"ContainerStarted","Data":"812f87636cfb895a92bfe1161318c6aa66fa0b5fbab30b7eb144a68ba5bde0d1"} Feb 01 14:47:39 crc kubenswrapper[4820]: I0201 14:47:39.434987 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf" 
podStartSLOduration=2.015905878 podStartE2EDuration="2.434964685s" podCreationTimestamp="2026-02-01 14:47:37 +0000 UTC" firstStartedPulling="2026-02-01 14:47:38.300071061 +0000 UTC m=+1599.820437345" lastFinishedPulling="2026-02-01 14:47:38.719129868 +0000 UTC m=+1600.239496152" observedRunningTime="2026-02-01 14:47:39.432155066 +0000 UTC m=+1600.952521350" watchObservedRunningTime="2026-02-01 14:47:39.434964685 +0000 UTC m=+1600.955330969" Feb 01 14:47:40 crc kubenswrapper[4820]: I0201 14:47:40.056048 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-gm5js"] Feb 01 14:47:40 crc kubenswrapper[4820]: I0201 14:47:40.065552 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-gm5js"] Feb 01 14:47:41 crc kubenswrapper[4820]: I0201 14:47:41.212356 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5488a765-644a-47bd-9665-afb4d8bdb6ea" path="/var/lib/kubelet/pods/5488a765-644a-47bd-9665-afb4d8bdb6ea/volumes" Feb 01 14:47:50 crc kubenswrapper[4820]: I0201 14:47:50.198968 4820 scope.go:117] "RemoveContainer" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:47:50 crc kubenswrapper[4820]: E0201 14:47:50.199731 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:48:02 crc kubenswrapper[4820]: I0201 14:48:02.199552 4820 scope.go:117] "RemoveContainer" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:48:02 crc kubenswrapper[4820]: E0201 14:48:02.200385 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:48:05 crc kubenswrapper[4820]: I0201 14:48:05.904843 4820 scope.go:117] "RemoveContainer" containerID="d38db8cb1c9eec263e78781fb8f1be799fa99d500537c7cfae96cfda8f636561" Feb 01 14:48:05 crc kubenswrapper[4820]: I0201 14:48:05.935066 4820 scope.go:117] "RemoveContainer" containerID="08d3c20305ac64a46fa6fb896129520c756141077db4cb2252bcae5497154021" Feb 01 14:48:06 crc kubenswrapper[4820]: I0201 14:48:06.012018 4820 scope.go:117] "RemoveContainer" containerID="b76059fba1e654409a9b4c07ad19076bccf9eee29f71b54cad3dce778b0e59ce" Feb 01 14:48:06 crc kubenswrapper[4820]: I0201 14:48:06.068098 4820 scope.go:117] "RemoveContainer" containerID="2ca19a73e4632511bdc1770de5ef889951ca5c50b306b3b5bba53d03d31d72a5" Feb 01 14:48:06 crc kubenswrapper[4820]: I0201 14:48:06.098253 4820 scope.go:117] "RemoveContainer" containerID="c1d557d577c9fe18f447431eca536ae30f9de1d5a7f254ed62c982e65c427bb5" Feb 01 14:48:06 crc kubenswrapper[4820]: I0201 14:48:06.160548 4820 scope.go:117] "RemoveContainer" containerID="9ede13446cca7394b3e83552878c4917baf91a9f555f02ab0ee279d42e0be258" Feb 01 14:48:06 crc kubenswrapper[4820]: I0201 14:48:06.211968 4820 scope.go:117] "RemoveContainer" 
containerID="7f1b9b5d1c27be0d153c62d77dd01e5d75c40b404fd991b695381835eb5f6882" Feb 01 14:48:06 crc kubenswrapper[4820]: I0201 14:48:06.244682 4820 scope.go:117] "RemoveContainer" containerID="73662602e50d54ebfc50b493bbc18a901f4c698f9ae8ce3a2c411f4cea2fd5a1" Feb 01 14:48:06 crc kubenswrapper[4820]: I0201 14:48:06.265200 4820 scope.go:117] "RemoveContainer" containerID="d1b7e818fbb848716f060bc006c43e68390e8ed317ef57472f2a61ff7cdda78d" Feb 01 14:48:11 crc kubenswrapper[4820]: I0201 14:48:11.062461 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vd4hp"] Feb 01 14:48:11 crc kubenswrapper[4820]: I0201 14:48:11.073460 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-mg8xp"] Feb 01 14:48:11 crc kubenswrapper[4820]: I0201 14:48:11.082932 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vd4hp"] Feb 01 14:48:11 crc kubenswrapper[4820]: I0201 14:48:11.091617 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-mg8xp"] Feb 01 14:48:11 crc kubenswrapper[4820]: I0201 14:48:11.210005 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cb06733-7a89-4153-a16b-e69317c5f8a3" path="/var/lib/kubelet/pods/1cb06733-7a89-4153-a16b-e69317c5f8a3/volumes" Feb 01 14:48:11 crc kubenswrapper[4820]: I0201 14:48:11.210934 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82e23cb3-1af3-46a1-9805-8bf8579b1991" path="/var/lib/kubelet/pods/82e23cb3-1af3-46a1-9805-8bf8579b1991/volumes" Feb 01 14:48:13 crc kubenswrapper[4820]: I0201 14:48:13.036033 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-zr8wv"] Feb 01 14:48:13 crc kubenswrapper[4820]: I0201 14:48:13.046750 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-zr8wv"] Feb 01 14:48:13 crc kubenswrapper[4820]: I0201 14:48:13.211565 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8596fa26-8ba1-4348-8493-3df37f0cfcaa" path="/var/lib/kubelet/pods/8596fa26-8ba1-4348-8493-3df37f0cfcaa/volumes" Feb 01 14:48:17 crc kubenswrapper[4820]: I0201 14:48:17.198866 4820 scope.go:117] "RemoveContainer" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:48:17 crc kubenswrapper[4820]: E0201 14:48:17.199507 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:48:22 crc kubenswrapper[4820]: I0201 14:48:22.036526 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-s4w47"] Feb 01 14:48:22 crc kubenswrapper[4820]: I0201 14:48:22.044149 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-s4w47"] Feb 01 14:48:23 crc kubenswrapper[4820]: I0201 14:48:23.209063 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5668430a-a444-4146-b357-30f626e2e9d6" path="/var/lib/kubelet/pods/5668430a-a444-4146-b357-30f626e2e9d6/volumes" Feb 01 14:48:24 crc kubenswrapper[4820]: I0201 14:48:24.826120 4820 generic.go:334] "Generic (PLEG): container finished" podID="68589532-7a9c-4722-97b0-4fac8e42fac7" 
containerID="812f87636cfb895a92bfe1161318c6aa66fa0b5fbab30b7eb144a68ba5bde0d1" exitCode=0 Feb 01 14:48:24 crc kubenswrapper[4820]: I0201 14:48:24.826192 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf" event={"ID":"68589532-7a9c-4722-97b0-4fac8e42fac7","Type":"ContainerDied","Data":"812f87636cfb895a92bfe1161318c6aa66fa0b5fbab30b7eb144a68ba5bde0d1"} Feb 01 14:48:26 crc kubenswrapper[4820]: I0201 14:48:26.235666 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf" Feb 01 14:48:26 crc kubenswrapper[4820]: I0201 14:48:26.426677 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68589532-7a9c-4722-97b0-4fac8e42fac7-inventory\") pod \"68589532-7a9c-4722-97b0-4fac8e42fac7\" (UID: \"68589532-7a9c-4722-97b0-4fac8e42fac7\") " Feb 01 14:48:26 crc kubenswrapper[4820]: I0201 14:48:26.426834 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mltrb\" (UniqueName: \"kubernetes.io/projected/68589532-7a9c-4722-97b0-4fac8e42fac7-kube-api-access-mltrb\") pod \"68589532-7a9c-4722-97b0-4fac8e42fac7\" (UID: \"68589532-7a9c-4722-97b0-4fac8e42fac7\") " Feb 01 14:48:26 crc kubenswrapper[4820]: I0201 14:48:26.427060 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68589532-7a9c-4722-97b0-4fac8e42fac7-ssh-key-openstack-edpm-ipam\") pod \"68589532-7a9c-4722-97b0-4fac8e42fac7\" (UID: \"68589532-7a9c-4722-97b0-4fac8e42fac7\") " Feb 01 14:48:26 crc kubenswrapper[4820]: I0201 14:48:26.431881 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68589532-7a9c-4722-97b0-4fac8e42fac7-kube-api-access-mltrb" (OuterVolumeSpecName: "kube-api-access-mltrb") pod "68589532-7a9c-4722-97b0-4fac8e42fac7" (UID: "68589532-7a9c-4722-97b0-4fac8e42fac7"). InnerVolumeSpecName "kube-api-access-mltrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:48:26 crc kubenswrapper[4820]: I0201 14:48:26.451668 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68589532-7a9c-4722-97b0-4fac8e42fac7-inventory" (OuterVolumeSpecName: "inventory") pod "68589532-7a9c-4722-97b0-4fac8e42fac7" (UID: "68589532-7a9c-4722-97b0-4fac8e42fac7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:48:26 crc kubenswrapper[4820]: I0201 14:48:26.452081 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68589532-7a9c-4722-97b0-4fac8e42fac7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "68589532-7a9c-4722-97b0-4fac8e42fac7" (UID: "68589532-7a9c-4722-97b0-4fac8e42fac7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:48:26 crc kubenswrapper[4820]: I0201 14:48:26.531489 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68589532-7a9c-4722-97b0-4fac8e42fac7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 14:48:26 crc kubenswrapper[4820]: I0201 14:48:26.531575 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68589532-7a9c-4722-97b0-4fac8e42fac7-inventory\") on node \"crc\" DevicePath \"\"" Feb 01 14:48:26 crc kubenswrapper[4820]: I0201 14:48:26.531598 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mltrb\" (UniqueName: \"kubernetes.io/projected/68589532-7a9c-4722-97b0-4fac8e42fac7-kube-api-access-mltrb\") on node \"crc\" DevicePath \"\"" Feb 01 14:48:26 crc kubenswrapper[4820]: I0201 14:48:26.847200 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf" event={"ID":"68589532-7a9c-4722-97b0-4fac8e42fac7","Type":"ContainerDied","Data":"354189a68b50e63dc5f571684f277cffce51664c3480699e7d1c503dc502c1ce"} Feb 01 14:48:26 crc kubenswrapper[4820]: I0201 14:48:26.847313 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="354189a68b50e63dc5f571684f277cffce51664c3480699e7d1c503dc502c1ce" Feb 01 14:48:26 crc kubenswrapper[4820]: I0201 14:48:26.847275 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf" Feb 01 14:48:26 crc kubenswrapper[4820]: I0201 14:48:26.914019 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4rnxz"] Feb 01 14:48:26 crc kubenswrapper[4820]: E0201 14:48:26.914374 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68589532-7a9c-4722-97b0-4fac8e42fac7" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 01 14:48:26 crc kubenswrapper[4820]: I0201 14:48:26.914392 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="68589532-7a9c-4722-97b0-4fac8e42fac7" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 01 14:48:26 crc kubenswrapper[4820]: I0201 14:48:26.914542 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="68589532-7a9c-4722-97b0-4fac8e42fac7" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 01 14:48:26 crc kubenswrapper[4820]: I0201 14:48:26.916581 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4rnxz" Feb 01 14:48:26 crc kubenswrapper[4820]: I0201 14:48:26.918813 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw" Feb 01 14:48:26 crc kubenswrapper[4820]: I0201 14:48:26.919284 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 01 14:48:26 crc kubenswrapper[4820]: I0201 14:48:26.919793 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 01 14:48:26 crc kubenswrapper[4820]: I0201 14:48:26.920984 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 01 14:48:26 crc kubenswrapper[4820]: I0201 14:48:26.940067 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4rnxz"] Feb 01 14:48:27 crc kubenswrapper[4820]: I0201 14:48:27.039743 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7wpk\" (UniqueName: \"kubernetes.io/projected/bdfc658e-1be5-4a3d-af19-4863b5675303-kube-api-access-v7wpk\") pod \"ssh-known-hosts-edpm-deployment-4rnxz\" (UID: \"bdfc658e-1be5-4a3d-af19-4863b5675303\") " pod="openstack/ssh-known-hosts-edpm-deployment-4rnxz" Feb 01 14:48:27 crc kubenswrapper[4820]: I0201 14:48:27.039908 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bdfc658e-1be5-4a3d-af19-4863b5675303-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-4rnxz\" (UID: \"bdfc658e-1be5-4a3d-af19-4863b5675303\") " pod="openstack/ssh-known-hosts-edpm-deployment-4rnxz" Feb 01 14:48:27 crc kubenswrapper[4820]: I0201 14:48:27.039949 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdfc658e-1be5-4a3d-af19-4863b5675303-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-4rnxz\" (UID: \"bdfc658e-1be5-4a3d-af19-4863b5675303\") " pod="openstack/ssh-known-hosts-edpm-deployment-4rnxz" Feb 01 14:48:27 crc kubenswrapper[4820]: I0201 14:48:27.141433 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7wpk\" (UniqueName: \"kubernetes.io/projected/bdfc658e-1be5-4a3d-af19-4863b5675303-kube-api-access-v7wpk\") pod \"ssh-known-hosts-edpm-deployment-4rnxz\" (UID: \"bdfc658e-1be5-4a3d-af19-4863b5675303\") " pod="openstack/ssh-known-hosts-edpm-deployment-4rnxz" Feb 01 14:48:27 crc kubenswrapper[4820]: I0201 14:48:27.141720 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bdfc658e-1be5-4a3d-af19-4863b5675303-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-4rnxz\" (UID: \"bdfc658e-1be5-4a3d-af19-4863b5675303\") " pod="openstack/ssh-known-hosts-edpm-deployment-4rnxz" Feb 01 14:48:27 crc kubenswrapper[4820]: I0201 14:48:27.141753 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdfc658e-1be5-4a3d-af19-4863b5675303-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-4rnxz\" (UID: \"bdfc658e-1be5-4a3d-af19-4863b5675303\") " pod="openstack/ssh-known-hosts-edpm-deployment-4rnxz" Feb 01 14:48:27 crc 
kubenswrapper[4820]: I0201 14:48:27.146093 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bdfc658e-1be5-4a3d-af19-4863b5675303-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-4rnxz\" (UID: \"bdfc658e-1be5-4a3d-af19-4863b5675303\") " pod="openstack/ssh-known-hosts-edpm-deployment-4rnxz" Feb 01 14:48:27 crc kubenswrapper[4820]: I0201 14:48:27.150551 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdfc658e-1be5-4a3d-af19-4863b5675303-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-4rnxz\" (UID: \"bdfc658e-1be5-4a3d-af19-4863b5675303\") " pod="openstack/ssh-known-hosts-edpm-deployment-4rnxz" Feb 01 14:48:27 crc kubenswrapper[4820]: I0201 14:48:27.161548 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7wpk\" (UniqueName: \"kubernetes.io/projected/bdfc658e-1be5-4a3d-af19-4863b5675303-kube-api-access-v7wpk\") pod \"ssh-known-hosts-edpm-deployment-4rnxz\" (UID: \"bdfc658e-1be5-4a3d-af19-4863b5675303\") " pod="openstack/ssh-known-hosts-edpm-deployment-4rnxz" Feb 01 14:48:27 crc kubenswrapper[4820]: I0201 14:48:27.290469 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4rnxz" Feb 01 14:48:27 crc kubenswrapper[4820]: I0201 14:48:27.804102 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4rnxz"] Feb 01 14:48:27 crc kubenswrapper[4820]: I0201 14:48:27.857624 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4rnxz" event={"ID":"bdfc658e-1be5-4a3d-af19-4863b5675303","Type":"ContainerStarted","Data":"1e0ae4b1efeb6ee47929e3d5787f73ee728156d0405db41767bde30724bfcae3"} Feb 01 14:48:28 crc kubenswrapper[4820]: I0201 14:48:28.869962 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4rnxz" event={"ID":"bdfc658e-1be5-4a3d-af19-4863b5675303","Type":"ContainerStarted","Data":"acc3e072988493e639a850fd978f9d569e6aa2683aab2f9f0037121208ca4da3"} Feb 01 14:48:28 crc kubenswrapper[4820]: I0201 14:48:28.910334 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-4rnxz" podStartSLOduration=2.449085266 podStartE2EDuration="2.910308264s" podCreationTimestamp="2026-02-01 14:48:26 +0000 UTC" firstStartedPulling="2026-02-01 14:48:27.808920147 +0000 UTC m=+1649.329286431" lastFinishedPulling="2026-02-01 14:48:28.270143135 +0000 UTC m=+1649.790509429" observedRunningTime="2026-02-01 14:48:28.900200687 +0000 UTC m=+1650.420566991" watchObservedRunningTime="2026-02-01 14:48:28.910308264 +0000 UTC m=+1650.430674558" Feb 01 14:48:30 crc kubenswrapper[4820]: I0201 14:48:30.035935 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-wpqvp"] Feb 01 14:48:30 crc kubenswrapper[4820]: I0201 14:48:30.042476 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-wpqvp"] Feb 01 14:48:31 crc kubenswrapper[4820]: I0201 14:48:31.209671 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="857bc684-4d17-461f-9183-6c0a7ac89845" path="/var/lib/kubelet/pods/857bc684-4d17-461f-9183-6c0a7ac89845/volumes" Feb 01 14:48:32 crc kubenswrapper[4820]: I0201 14:48:32.198507 4820 scope.go:117] "RemoveContainer" 
containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:48:32 crc kubenswrapper[4820]: E0201 14:48:32.198752 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:48:34 crc kubenswrapper[4820]: I0201 14:48:34.923589 4820 generic.go:334] "Generic (PLEG): container finished" podID="bdfc658e-1be5-4a3d-af19-4863b5675303" containerID="acc3e072988493e639a850fd978f9d569e6aa2683aab2f9f0037121208ca4da3" exitCode=0 Feb 01 14:48:34 crc kubenswrapper[4820]: I0201 14:48:34.923684 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4rnxz" event={"ID":"bdfc658e-1be5-4a3d-af19-4863b5675303","Type":"ContainerDied","Data":"acc3e072988493e639a850fd978f9d569e6aa2683aab2f9f0037121208ca4da3"} Feb 01 14:48:36 crc kubenswrapper[4820]: I0201 14:48:36.370981 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4rnxz" Feb 01 14:48:36 crc kubenswrapper[4820]: I0201 14:48:36.453963 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7wpk\" (UniqueName: \"kubernetes.io/projected/bdfc658e-1be5-4a3d-af19-4863b5675303-kube-api-access-v7wpk\") pod \"bdfc658e-1be5-4a3d-af19-4863b5675303\" (UID: \"bdfc658e-1be5-4a3d-af19-4863b5675303\") " Feb 01 14:48:36 crc kubenswrapper[4820]: I0201 14:48:36.454046 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bdfc658e-1be5-4a3d-af19-4863b5675303-inventory-0\") pod \"bdfc658e-1be5-4a3d-af19-4863b5675303\" (UID: \"bdfc658e-1be5-4a3d-af19-4863b5675303\") " Feb 01 14:48:36 crc kubenswrapper[4820]: I0201 14:48:36.454257 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdfc658e-1be5-4a3d-af19-4863b5675303-ssh-key-openstack-edpm-ipam\") pod \"bdfc658e-1be5-4a3d-af19-4863b5675303\" (UID: \"bdfc658e-1be5-4a3d-af19-4863b5675303\") " Feb 01 14:48:36 crc kubenswrapper[4820]: I0201 14:48:36.464676 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdfc658e-1be5-4a3d-af19-4863b5675303-kube-api-access-v7wpk" (OuterVolumeSpecName: "kube-api-access-v7wpk") pod "bdfc658e-1be5-4a3d-af19-4863b5675303" (UID: "bdfc658e-1be5-4a3d-af19-4863b5675303"). InnerVolumeSpecName "kube-api-access-v7wpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:48:36 crc kubenswrapper[4820]: I0201 14:48:36.485636 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdfc658e-1be5-4a3d-af19-4863b5675303-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "bdfc658e-1be5-4a3d-af19-4863b5675303" (UID: "bdfc658e-1be5-4a3d-af19-4863b5675303"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:48:36 crc kubenswrapper[4820]: I0201 14:48:36.495645 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdfc658e-1be5-4a3d-af19-4863b5675303-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bdfc658e-1be5-4a3d-af19-4863b5675303" (UID: "bdfc658e-1be5-4a3d-af19-4863b5675303"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:48:36 crc kubenswrapper[4820]: I0201 14:48:36.556579 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdfc658e-1be5-4a3d-af19-4863b5675303-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 14:48:36 crc kubenswrapper[4820]: I0201 14:48:36.556797 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7wpk\" (UniqueName: \"kubernetes.io/projected/bdfc658e-1be5-4a3d-af19-4863b5675303-kube-api-access-v7wpk\") on node \"crc\" DevicePath \"\"" Feb 01 14:48:36 crc kubenswrapper[4820]: I0201 14:48:36.556903 4820 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bdfc658e-1be5-4a3d-af19-4863b5675303-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 01 14:48:36 crc kubenswrapper[4820]: I0201 14:48:36.946330 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4rnxz" event={"ID":"bdfc658e-1be5-4a3d-af19-4863b5675303","Type":"ContainerDied","Data":"1e0ae4b1efeb6ee47929e3d5787f73ee728156d0405db41767bde30724bfcae3"} Feb 01 14:48:36 crc kubenswrapper[4820]: I0201 14:48:36.946374 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e0ae4b1efeb6ee47929e3d5787f73ee728156d0405db41767bde30724bfcae3" Feb 01 14:48:36 crc kubenswrapper[4820]: I0201 14:48:36.946435 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4rnxz" Feb 01 14:48:37 crc kubenswrapper[4820]: I0201 14:48:37.016520 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-vcs9j"] Feb 01 14:48:37 crc kubenswrapper[4820]: E0201 14:48:37.016929 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdfc658e-1be5-4a3d-af19-4863b5675303" containerName="ssh-known-hosts-edpm-deployment" Feb 01 14:48:37 crc kubenswrapper[4820]: I0201 14:48:37.016941 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdfc658e-1be5-4a3d-af19-4863b5675303" containerName="ssh-known-hosts-edpm-deployment" Feb 01 14:48:37 crc kubenswrapper[4820]: I0201 14:48:37.017114 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdfc658e-1be5-4a3d-af19-4863b5675303" containerName="ssh-known-hosts-edpm-deployment" Feb 01 14:48:37 crc kubenswrapper[4820]: I0201 14:48:37.017739 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vcs9j" Feb 01 14:48:37 crc kubenswrapper[4820]: I0201 14:48:37.021786 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw" Feb 01 14:48:37 crc kubenswrapper[4820]: I0201 14:48:37.022127 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 01 14:48:37 crc kubenswrapper[4820]: I0201 14:48:37.022278 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 01 14:48:37 crc kubenswrapper[4820]: I0201 14:48:37.022474 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 01 14:48:37 crc kubenswrapper[4820]: I0201 14:48:37.025168 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-vcs9j"] Feb 01 14:48:37 crc kubenswrapper[4820]: I0201 14:48:37.069329 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vhb7\" (UniqueName: \"kubernetes.io/projected/ec1f8af4-d9dd-47eb-addb-f8247cff3b68-kube-api-access-8vhb7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vcs9j\" (UID: \"ec1f8af4-d9dd-47eb-addb-f8247cff3b68\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vcs9j" Feb 01 14:48:37 crc kubenswrapper[4820]: I0201 14:48:37.069485 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec1f8af4-d9dd-47eb-addb-f8247cff3b68-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vcs9j\" (UID: \"ec1f8af4-d9dd-47eb-addb-f8247cff3b68\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vcs9j" Feb 01 14:48:37 crc kubenswrapper[4820]: I0201 14:48:37.069517 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec1f8af4-d9dd-47eb-addb-f8247cff3b68-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vcs9j\" (UID: \"ec1f8af4-d9dd-47eb-addb-f8247cff3b68\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vcs9j" Feb 01 14:48:37 crc kubenswrapper[4820]: E0201 14:48:37.161811 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdfc658e_1be5_4a3d_af19_4863b5675303.slice/crio-1e0ae4b1efeb6ee47929e3d5787f73ee728156d0405db41767bde30724bfcae3\": RecentStats: unable to find data in memory cache]" Feb 01 14:48:37 crc kubenswrapper[4820]: I0201 14:48:37.170372 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec1f8af4-d9dd-47eb-addb-f8247cff3b68-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vcs9j\" (UID: \"ec1f8af4-d9dd-47eb-addb-f8247cff3b68\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vcs9j" Feb 01 14:48:37 crc kubenswrapper[4820]: I0201 14:48:37.170411 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec1f8af4-d9dd-47eb-addb-f8247cff3b68-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vcs9j\" (UID: 
\"ec1f8af4-d9dd-47eb-addb-f8247cff3b68\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vcs9j" Feb 01 14:48:37 crc kubenswrapper[4820]: I0201 14:48:37.170505 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vhb7\" (UniqueName: \"kubernetes.io/projected/ec1f8af4-d9dd-47eb-addb-f8247cff3b68-kube-api-access-8vhb7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vcs9j\" (UID: \"ec1f8af4-d9dd-47eb-addb-f8247cff3b68\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vcs9j" Feb 01 14:48:37 crc kubenswrapper[4820]: I0201 14:48:37.173864 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec1f8af4-d9dd-47eb-addb-f8247cff3b68-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vcs9j\" (UID: \"ec1f8af4-d9dd-47eb-addb-f8247cff3b68\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vcs9j" Feb 01 14:48:37 crc kubenswrapper[4820]: I0201 14:48:37.173864 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec1f8af4-d9dd-47eb-addb-f8247cff3b68-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vcs9j\" (UID: \"ec1f8af4-d9dd-47eb-addb-f8247cff3b68\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vcs9j" Feb 01 14:48:37 crc kubenswrapper[4820]: I0201 14:48:37.186657 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vhb7\" (UniqueName: \"kubernetes.io/projected/ec1f8af4-d9dd-47eb-addb-f8247cff3b68-kube-api-access-8vhb7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vcs9j\" (UID: \"ec1f8af4-d9dd-47eb-addb-f8247cff3b68\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vcs9j" Feb 01 14:48:37 crc kubenswrapper[4820]: I0201 14:48:37.385784 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vcs9j" Feb 01 14:48:37 crc kubenswrapper[4820]: I0201 14:48:37.705854 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-vcs9j"] Feb 01 14:48:37 crc kubenswrapper[4820]: I0201 14:48:37.961037 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vcs9j" event={"ID":"ec1f8af4-d9dd-47eb-addb-f8247cff3b68","Type":"ContainerStarted","Data":"2c78ac369f774b2ee57972ffe9b11649af65a9e06798726333aa20d62bd0b431"} Feb 01 14:48:38 crc kubenswrapper[4820]: I0201 14:48:38.975183 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vcs9j" event={"ID":"ec1f8af4-d9dd-47eb-addb-f8247cff3b68","Type":"ContainerStarted","Data":"89c18dee8e5420e9891c7c22fb640fd325a4d2fcc67399cac38572f758da65f9"} Feb 01 14:48:38 crc kubenswrapper[4820]: I0201 14:48:38.998601 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vcs9j" podStartSLOduration=2.564632729 podStartE2EDuration="2.99857669s" podCreationTimestamp="2026-02-01 14:48:36 +0000 UTC" firstStartedPulling="2026-02-01 14:48:37.702680503 +0000 UTC m=+1659.223046797" lastFinishedPulling="2026-02-01 14:48:38.136624474 +0000 UTC m=+1659.656990758" observedRunningTime="2026-02-01 14:48:38.99525486 +0000 UTC m=+1660.515621224" watchObservedRunningTime="2026-02-01 14:48:38.99857669 +0000 UTC m=+1660.518943014" Feb 01 14:48:43 crc kubenswrapper[4820]: I0201 14:48:43.200639 4820 scope.go:117] "RemoveContainer" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:48:43 crc kubenswrapper[4820]: E0201 14:48:43.201723 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:48:46 crc kubenswrapper[4820]: I0201 14:48:46.038977 4820 generic.go:334] "Generic (PLEG): container finished" podID="ec1f8af4-d9dd-47eb-addb-f8247cff3b68" containerID="89c18dee8e5420e9891c7c22fb640fd325a4d2fcc67399cac38572f758da65f9" exitCode=0 Feb 01 14:48:46 crc kubenswrapper[4820]: I0201 14:48:46.039079 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vcs9j" event={"ID":"ec1f8af4-d9dd-47eb-addb-f8247cff3b68","Type":"ContainerDied","Data":"89c18dee8e5420e9891c7c22fb640fd325a4d2fcc67399cac38572f758da65f9"} Feb 01 14:48:47 crc kubenswrapper[4820]: I0201 14:48:47.454373 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vcs9j" Feb 01 14:48:47 crc kubenswrapper[4820]: I0201 14:48:47.647062 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec1f8af4-d9dd-47eb-addb-f8247cff3b68-inventory\") pod \"ec1f8af4-d9dd-47eb-addb-f8247cff3b68\" (UID: \"ec1f8af4-d9dd-47eb-addb-f8247cff3b68\") " Feb 01 14:48:47 crc kubenswrapper[4820]: I0201 14:48:47.647136 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vhb7\" (UniqueName: \"kubernetes.io/projected/ec1f8af4-d9dd-47eb-addb-f8247cff3b68-kube-api-access-8vhb7\") pod \"ec1f8af4-d9dd-47eb-addb-f8247cff3b68\" (UID: \"ec1f8af4-d9dd-47eb-addb-f8247cff3b68\") " Feb 01 14:48:47 crc kubenswrapper[4820]: I0201 14:48:47.647178 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec1f8af4-d9dd-47eb-addb-f8247cff3b68-ssh-key-openstack-edpm-ipam\") pod \"ec1f8af4-d9dd-47eb-addb-f8247cff3b68\" (UID: \"ec1f8af4-d9dd-47eb-addb-f8247cff3b68\") " Feb 01 14:48:47 crc kubenswrapper[4820]: I0201 14:48:47.652503 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec1f8af4-d9dd-47eb-addb-f8247cff3b68-kube-api-access-8vhb7" (OuterVolumeSpecName: "kube-api-access-8vhb7") pod "ec1f8af4-d9dd-47eb-addb-f8247cff3b68" (UID: "ec1f8af4-d9dd-47eb-addb-f8247cff3b68"). InnerVolumeSpecName "kube-api-access-8vhb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:48:47 crc kubenswrapper[4820]: I0201 14:48:47.671083 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec1f8af4-d9dd-47eb-addb-f8247cff3b68-inventory" (OuterVolumeSpecName: "inventory") pod "ec1f8af4-d9dd-47eb-addb-f8247cff3b68" (UID: "ec1f8af4-d9dd-47eb-addb-f8247cff3b68"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:48:47 crc kubenswrapper[4820]: I0201 14:48:47.676783 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec1f8af4-d9dd-47eb-addb-f8247cff3b68-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ec1f8af4-d9dd-47eb-addb-f8247cff3b68" (UID: "ec1f8af4-d9dd-47eb-addb-f8247cff3b68"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:48:47 crc kubenswrapper[4820]: I0201 14:48:47.749825 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec1f8af4-d9dd-47eb-addb-f8247cff3b68-inventory\") on node \"crc\" DevicePath \"\"" Feb 01 14:48:47 crc kubenswrapper[4820]: I0201 14:48:47.750304 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vhb7\" (UniqueName: \"kubernetes.io/projected/ec1f8af4-d9dd-47eb-addb-f8247cff3b68-kube-api-access-8vhb7\") on node \"crc\" DevicePath \"\"" Feb 01 14:48:47 crc kubenswrapper[4820]: I0201 14:48:47.750330 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec1f8af4-d9dd-47eb-addb-f8247cff3b68-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 14:48:48 crc kubenswrapper[4820]: I0201 14:48:48.058806 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vcs9j" event={"ID":"ec1f8af4-d9dd-47eb-addb-f8247cff3b68","Type":"ContainerDied","Data":"2c78ac369f774b2ee57972ffe9b11649af65a9e06798726333aa20d62bd0b431"} Feb 01 14:48:48 crc kubenswrapper[4820]: I0201 14:48:48.058855 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c78ac369f774b2ee57972ffe9b11649af65a9e06798726333aa20d62bd0b431" Feb 01 14:48:48 crc kubenswrapper[4820]: I0201 14:48:48.058890 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vcs9j" Feb 01 14:48:48 crc kubenswrapper[4820]: I0201 14:48:48.179010 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x"] Feb 01 14:48:48 crc kubenswrapper[4820]: E0201 14:48:48.179377 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec1f8af4-d9dd-47eb-addb-f8247cff3b68" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 01 14:48:48 crc kubenswrapper[4820]: I0201 14:48:48.179395 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec1f8af4-d9dd-47eb-addb-f8247cff3b68" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 01 14:48:48 crc kubenswrapper[4820]: I0201 14:48:48.179615 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec1f8af4-d9dd-47eb-addb-f8247cff3b68" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 01 14:48:48 crc kubenswrapper[4820]: I0201 14:48:48.180233 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x" Feb 01 14:48:48 crc kubenswrapper[4820]: I0201 14:48:48.186645 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x"] Feb 01 14:48:48 crc kubenswrapper[4820]: I0201 14:48:48.190533 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw" Feb 01 14:48:48 crc kubenswrapper[4820]: I0201 14:48:48.190780 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 01 14:48:48 crc kubenswrapper[4820]: I0201 14:48:48.195837 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 01 14:48:48 crc kubenswrapper[4820]: I0201 14:48:48.196400 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 01 14:48:48 crc kubenswrapper[4820]: I0201 14:48:48.262054 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b503e0ef-9cdc-4034-b179-02a10457b229-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x\" (UID: \"b503e0ef-9cdc-4034-b179-02a10457b229\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x" Feb 01 14:48:48 crc kubenswrapper[4820]: I0201 14:48:48.262203 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbd9j\" (UniqueName: \"kubernetes.io/projected/b503e0ef-9cdc-4034-b179-02a10457b229-kube-api-access-cbd9j\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x\" (UID: \"b503e0ef-9cdc-4034-b179-02a10457b229\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x" Feb 01 14:48:48 crc kubenswrapper[4820]: I0201 14:48:48.262224 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b503e0ef-9cdc-4034-b179-02a10457b229-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x\" (UID: \"b503e0ef-9cdc-4034-b179-02a10457b229\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x" Feb 01 14:48:48 crc kubenswrapper[4820]: I0201 14:48:48.363497 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b503e0ef-9cdc-4034-b179-02a10457b229-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x\" (UID: \"b503e0ef-9cdc-4034-b179-02a10457b229\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x" Feb 01 14:48:48 crc kubenswrapper[4820]: I0201 14:48:48.363675 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbd9j\" (UniqueName: \"kubernetes.io/projected/b503e0ef-9cdc-4034-b179-02a10457b229-kube-api-access-cbd9j\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x\" (UID: \"b503e0ef-9cdc-4034-b179-02a10457b229\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x" Feb 01 14:48:48 crc kubenswrapper[4820]: I0201 14:48:48.363704 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b503e0ef-9cdc-4034-b179-02a10457b229-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x\" (UID: \"b503e0ef-9cdc-4034-b179-02a10457b229\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x" Feb 01 14:48:48 crc kubenswrapper[4820]: I0201 14:48:48.367794 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b503e0ef-9cdc-4034-b179-02a10457b229-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x\" (UID: \"b503e0ef-9cdc-4034-b179-02a10457b229\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x" Feb 01 14:48:48 crc kubenswrapper[4820]: I0201 14:48:48.371970 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b503e0ef-9cdc-4034-b179-02a10457b229-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x\" (UID: \"b503e0ef-9cdc-4034-b179-02a10457b229\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x" Feb 01 14:48:48 crc kubenswrapper[4820]: I0201 14:48:48.380014 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbd9j\" (UniqueName: \"kubernetes.io/projected/b503e0ef-9cdc-4034-b179-02a10457b229-kube-api-access-cbd9j\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x\" (UID: \"b503e0ef-9cdc-4034-b179-02a10457b229\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x" Feb 01 14:48:48 crc kubenswrapper[4820]: I0201 14:48:48.499555 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x" Feb 01 14:48:48 crc kubenswrapper[4820]: I0201 14:48:48.835127 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x"] Feb 01 14:48:48 crc kubenswrapper[4820]: W0201 14:48:48.842049 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb503e0ef_9cdc_4034_b179_02a10457b229.slice/crio-10eb8649c35c5d67005f6e4fa7a7e6f3528ab311453ad23332c668dfe643e569 WatchSource:0}: Error finding container 10eb8649c35c5d67005f6e4fa7a7e6f3528ab311453ad23332c668dfe643e569: Status 404 returned error can't find the container with id 10eb8649c35c5d67005f6e4fa7a7e6f3528ab311453ad23332c668dfe643e569 Feb 01 14:48:49 crc kubenswrapper[4820]: I0201 14:48:49.069833 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x" event={"ID":"b503e0ef-9cdc-4034-b179-02a10457b229","Type":"ContainerStarted","Data":"10eb8649c35c5d67005f6e4fa7a7e6f3528ab311453ad23332c668dfe643e569"} Feb 01 14:48:50 crc kubenswrapper[4820]: I0201 14:48:50.077392 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x" event={"ID":"b503e0ef-9cdc-4034-b179-02a10457b229","Type":"ContainerStarted","Data":"ed9fbff21ffe69393de869ba5a5a3c9383061651600625c842ce8e7a93c984ff"} Feb 01 14:48:50 crc kubenswrapper[4820]: I0201 14:48:50.110013 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x" podStartSLOduration=1.657649961 podStartE2EDuration="2.109991781s" podCreationTimestamp="2026-02-01 14:48:48 +0000 UTC" firstStartedPulling="2026-02-01 14:48:48.845173324 +0000 UTC m=+1670.365539608" lastFinishedPulling="2026-02-01 14:48:49.297515144 +0000 UTC 
m=+1670.817881428" observedRunningTime="2026-02-01 14:48:50.099555217 +0000 UTC m=+1671.619921511" watchObservedRunningTime="2026-02-01 14:48:50.109991781 +0000 UTC m=+1671.630358065" Feb 01 14:48:55 crc kubenswrapper[4820]: I0201 14:48:55.200947 4820 scope.go:117] "RemoveContainer" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:48:55 crc kubenswrapper[4820]: E0201 14:48:55.201695 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:48:58 crc kubenswrapper[4820]: I0201 14:48:58.154571 4820 generic.go:334] "Generic (PLEG): container finished" podID="b503e0ef-9cdc-4034-b179-02a10457b229" containerID="ed9fbff21ffe69393de869ba5a5a3c9383061651600625c842ce8e7a93c984ff" exitCode=0 Feb 01 14:48:58 crc kubenswrapper[4820]: I0201 14:48:58.154670 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x" event={"ID":"b503e0ef-9cdc-4034-b179-02a10457b229","Type":"ContainerDied","Data":"ed9fbff21ffe69393de869ba5a5a3c9383061651600625c842ce8e7a93c984ff"} Feb 01 14:48:59 crc kubenswrapper[4820]: I0201 14:48:59.635395 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x" Feb 01 14:48:59 crc kubenswrapper[4820]: I0201 14:48:59.684591 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b503e0ef-9cdc-4034-b179-02a10457b229-ssh-key-openstack-edpm-ipam\") pod \"b503e0ef-9cdc-4034-b179-02a10457b229\" (UID: \"b503e0ef-9cdc-4034-b179-02a10457b229\") " Feb 01 14:48:59 crc kubenswrapper[4820]: I0201 14:48:59.684766 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b503e0ef-9cdc-4034-b179-02a10457b229-inventory\") pod \"b503e0ef-9cdc-4034-b179-02a10457b229\" (UID: \"b503e0ef-9cdc-4034-b179-02a10457b229\") " Feb 01 14:48:59 crc kubenswrapper[4820]: I0201 14:48:59.685238 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbd9j\" (UniqueName: \"kubernetes.io/projected/b503e0ef-9cdc-4034-b179-02a10457b229-kube-api-access-cbd9j\") pod \"b503e0ef-9cdc-4034-b179-02a10457b229\" (UID: \"b503e0ef-9cdc-4034-b179-02a10457b229\") " Feb 01 14:48:59 crc kubenswrapper[4820]: I0201 14:48:59.690407 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b503e0ef-9cdc-4034-b179-02a10457b229-kube-api-access-cbd9j" (OuterVolumeSpecName: "kube-api-access-cbd9j") pod "b503e0ef-9cdc-4034-b179-02a10457b229" (UID: "b503e0ef-9cdc-4034-b179-02a10457b229"). InnerVolumeSpecName "kube-api-access-cbd9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:48:59 crc kubenswrapper[4820]: I0201 14:48:59.709030 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b503e0ef-9cdc-4034-b179-02a10457b229-inventory" (OuterVolumeSpecName: "inventory") pod "b503e0ef-9cdc-4034-b179-02a10457b229" (UID: "b503e0ef-9cdc-4034-b179-02a10457b229"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:48:59 crc kubenswrapper[4820]: I0201 14:48:59.712845 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b503e0ef-9cdc-4034-b179-02a10457b229-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b503e0ef-9cdc-4034-b179-02a10457b229" (UID: "b503e0ef-9cdc-4034-b179-02a10457b229"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:48:59 crc kubenswrapper[4820]: I0201 14:48:59.786840 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b503e0ef-9cdc-4034-b179-02a10457b229-inventory\") on node \"crc\" DevicePath \"\"" Feb 01 14:48:59 crc kubenswrapper[4820]: I0201 14:48:59.786869 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbd9j\" (UniqueName: \"kubernetes.io/projected/b503e0ef-9cdc-4034-b179-02a10457b229-kube-api-access-cbd9j\") on node \"crc\" DevicePath \"\"" Feb 01 14:48:59 crc kubenswrapper[4820]: I0201 14:48:59.786893 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b503e0ef-9cdc-4034-b179-02a10457b229-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 14:49:00 crc kubenswrapper[4820]: I0201 14:49:00.176442 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x" event={"ID":"b503e0ef-9cdc-4034-b179-02a10457b229","Type":"ContainerDied","Data":"10eb8649c35c5d67005f6e4fa7a7e6f3528ab311453ad23332c668dfe643e569"} Feb 01 14:49:00 crc kubenswrapper[4820]: I0201 14:49:00.176739 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10eb8649c35c5d67005f6e4fa7a7e6f3528ab311453ad23332c668dfe643e569" Feb 01 14:49:00 crc kubenswrapper[4820]: I0201 14:49:00.176500 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x" Feb 01 14:49:02 crc kubenswrapper[4820]: I0201 14:49:02.047081 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-q9xjc"] Feb 01 14:49:02 crc kubenswrapper[4820]: I0201 14:49:02.061016 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-d7rxb"] Feb 01 14:49:02 crc kubenswrapper[4820]: I0201 14:49:02.079619 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0b48-account-create-update-l599b"] Feb 01 14:49:02 crc kubenswrapper[4820]: I0201 14:49:02.089362 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-f893-account-create-update-l5l4m"] Feb 01 14:49:02 crc kubenswrapper[4820]: I0201 14:49:02.097633 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-d7rxb"] Feb 01 14:49:02 crc kubenswrapper[4820]: I0201 14:49:02.105444 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-q9xjc"] Feb 01 14:49:02 crc kubenswrapper[4820]: I0201 14:49:02.112293 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-ftt9j"] Feb 01 14:49:02 crc kubenswrapper[4820]: I0201 14:49:02.119028 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-e179-account-create-update-mbqvz"] Feb 01 14:49:02 crc kubenswrapper[4820]: I0201 14:49:02.127596 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0b48-account-create-update-l599b"] Feb 01 14:49:02 crc kubenswrapper[4820]: I0201 14:49:02.135518 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-f893-account-create-update-l5l4m"] Feb 01 14:49:02 crc kubenswrapper[4820]: I0201 14:49:02.142904 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-e179-account-create-update-mbqvz"] Feb 01 14:49:02 crc kubenswrapper[4820]: I0201 14:49:02.149758 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-ftt9j"] Feb 01 14:49:03 crc kubenswrapper[4820]: I0201 14:49:03.215470 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2" path="/var/lib/kubelet/pods/093b36e0-2c27-4cbf-9c63-3a4fc7b6d8f2/volumes" Feb 01 14:49:03 crc kubenswrapper[4820]: I0201 14:49:03.216408 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="756d7bda-e919-4ca5-8549-80f31cc37ac7" path="/var/lib/kubelet/pods/756d7bda-e919-4ca5-8549-80f31cc37ac7/volumes" Feb 01 14:49:03 crc kubenswrapper[4820]: I0201 14:49:03.217062 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75db30c4-7da9-44d6-ad1a-02b67d4e8a85" path="/var/lib/kubelet/pods/75db30c4-7da9-44d6-ad1a-02b67d4e8a85/volumes" Feb 01 14:49:03 crc kubenswrapper[4820]: I0201 14:49:03.217655 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e" path="/var/lib/kubelet/pods/86f33ac5-a0e0-4b27-8b7b-1876f44cdc4e/volumes" Feb 01 14:49:03 crc kubenswrapper[4820]: I0201 14:49:03.218819 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9d299ca-c553-421d-bbd8-7aaebe472a6d" path="/var/lib/kubelet/pods/a9d299ca-c553-421d-bbd8-7aaebe472a6d/volumes" Feb 01 14:49:03 crc kubenswrapper[4820]: I0201 14:49:03.219442 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f099d076-3429-46a2-8592-97326229034c" path="/var/lib/kubelet/pods/f099d076-3429-46a2-8592-97326229034c/volumes" Feb 01 14:49:06 crc kubenswrapper[4820]: I0201 14:49:06.459074 4820 scope.go:117] "RemoveContainer" containerID="52a16d845830e10f1838d4cf45af8bd8bd00fbd402a134ade724aa12c06f8342" Feb 01 14:49:06 crc kubenswrapper[4820]: I0201 14:49:06.489317 4820 scope.go:117] "RemoveContainer" containerID="08bd69bcf00d21e3810db33270484aab1dacee8a10179b09f8452023b490e322" Feb 01 14:49:06 crc kubenswrapper[4820]: I0201 14:49:06.533027 4820 scope.go:117] "RemoveContainer" containerID="83f0a12e8039dcd8c3e9300c2d07347c0272cb54539306ef84e85e16f36aeb3c" Feb 01 14:49:06 crc kubenswrapper[4820]: I0201 14:49:06.580266 4820 scope.go:117] "RemoveContainer" containerID="540491a79e332edc73bbedc80aaf79b4a2df22da101d1884137685b5b7ff83a9" Feb 01 14:49:06 crc kubenswrapper[4820]: I0201 14:49:06.622813 4820 scope.go:117] "RemoveContainer" containerID="24576896bb05561754204963db5f31ce0a8396beffd0275c37125980d58b990b" Feb 01 14:49:06 crc kubenswrapper[4820]: I0201 14:49:06.670861 4820 scope.go:117] "RemoveContainer" containerID="c8d1951d4ed50234a117e9798814a5da75b8b7e9e34963963b674913b31b7f8b" Feb 01 14:49:06 crc kubenswrapper[4820]: I0201 14:49:06.712624 4820 scope.go:117] "RemoveContainer" containerID="a7d2a73a3aa9473404e974cca34cc6c012995d48099f1f8616bc193a93e55d6b" Feb 01 14:49:06 crc kubenswrapper[4820]: I0201 14:49:06.733772 4820 scope.go:117] "RemoveContainer" containerID="6f3247d86f67150168b22aa6a199e5c3a5a4b9ea1d4f28b6b3e34db834970357" Feb 01 14:49:06 crc kubenswrapper[4820]: I0201 14:49:06.754998 4820 scope.go:117] "RemoveContainer" containerID="aa0a5f8792601307a279d9ca93611812efbdfaba39271a4bb62827725f41699f" Feb 01 14:49:06 crc kubenswrapper[4820]: I0201 14:49:06.791329 4820 scope.go:117] "RemoveContainer" containerID="1549ff0345fd19a65d575dda8d07d9198415fd693a6c677d5e6c0758f760611d" Feb 01 14:49:06 crc kubenswrapper[4820]: I0201 14:49:06.812893 4820 scope.go:117] "RemoveContainer" containerID="5244e6bbf496ccea2ca747b01093f042b2d370a7e255bb50c57b39c6e4e75982" Feb 01 14:49:08 crc kubenswrapper[4820]: I0201 14:49:08.198554 4820 scope.go:117] "RemoveContainer" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:49:08 crc kubenswrapper[4820]: E0201 14:49:08.198889 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:49:22 crc kubenswrapper[4820]: I0201 14:49:22.198727 4820 scope.go:117] "RemoveContainer" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:49:22 crc kubenswrapper[4820]: E0201 14:49:22.199722 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:49:28 crc kubenswrapper[4820]: I0201 14:49:28.060392 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell0-conductor-db-sync-qqtrm"] Feb 01 14:49:28 crc kubenswrapper[4820]: I0201 14:49:28.069079 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qqtrm"] Feb 01 14:49:29 crc kubenswrapper[4820]: I0201 14:49:29.218082 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56ebbdff-635e-4998-b84f-04dbe869ab4e" path="/var/lib/kubelet/pods/56ebbdff-635e-4998-b84f-04dbe869ab4e/volumes" Feb 01 14:49:34 crc kubenswrapper[4820]: I0201 14:49:34.199449 4820 scope.go:117] "RemoveContainer" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:49:34 crc kubenswrapper[4820]: E0201 14:49:34.200282 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:49:48 crc kubenswrapper[4820]: I0201 14:49:48.199291 4820 scope.go:117] "RemoveContainer" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:49:48 crc kubenswrapper[4820]: E0201 14:49:48.200117 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:49:49 crc kubenswrapper[4820]: I0201 14:49:49.044337 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-sg5pc"] Feb 01 14:49:49 crc kubenswrapper[4820]: I0201 14:49:49.050805 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-sg5pc"] Feb 01 14:49:49 crc kubenswrapper[4820]: I0201 14:49:49.208986 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df439e0e-3443-4c9f-b049-8a36a7e38d86" path="/var/lib/kubelet/pods/df439e0e-3443-4c9f-b049-8a36a7e38d86/volumes" Feb 01 14:49:51 crc kubenswrapper[4820]: I0201 14:49:51.045338 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-s2kcz"] Feb 01 14:49:51 crc kubenswrapper[4820]: I0201 14:49:51.056719 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-s2kcz"] Feb 01 14:49:51 crc kubenswrapper[4820]: I0201 14:49:51.209837 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61fdf904-8a91-45c5-8f1a-0fd56291b77e" path="/var/lib/kubelet/pods/61fdf904-8a91-45c5-8f1a-0fd56291b77e/volumes" Feb 01 14:50:01 crc kubenswrapper[4820]: I0201 14:50:01.199655 4820 scope.go:117] "RemoveContainer" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:50:01 crc kubenswrapper[4820]: E0201 14:50:01.200671 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:50:06 crc kubenswrapper[4820]: I0201 14:50:06.997565 4820 scope.go:117] "RemoveContainer" containerID="9391c88c3cfcefdabd8aaeea23209970dc3056fa71411160501848a498eb0f0b" Feb 01 14:50:07 crc kubenswrapper[4820]: I0201 14:50:07.042686 4820 scope.go:117] "RemoveContainer" containerID="93c87a737dbd9d33db0867aca8f0653957e885cdff0417cf6174889b4daaaf03" Feb 01 14:50:07 crc kubenswrapper[4820]: I0201 14:50:07.107025 4820 scope.go:117] "RemoveContainer" containerID="daf9089a3b95eeef2a674a610ec81c6509e38e44776d6f1e9bed566660967141" Feb 01 14:50:12 crc kubenswrapper[4820]: I0201 14:50:12.198566 4820 scope.go:117] "RemoveContainer" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:50:12 crc kubenswrapper[4820]: E0201 14:50:12.199371 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:50:25 crc kubenswrapper[4820]: I0201 14:50:25.198828 4820 scope.go:117] "RemoveContainer" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:50:25 crc kubenswrapper[4820]: E0201 14:50:25.199818 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:50:34 crc kubenswrapper[4820]: I0201 14:50:34.054402 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-vbj27"] Feb 01 14:50:34 crc kubenswrapper[4820]: I0201 14:50:34.064076 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-vbj27"] Feb 01 14:50:35 crc kubenswrapper[4820]: I0201 14:50:35.208300 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5964c28-fda2-4e6d-85cf-59e7bf1ec9de" path="/var/lib/kubelet/pods/b5964c28-fda2-4e6d-85cf-59e7bf1ec9de/volumes" Feb 01 14:50:37 crc kubenswrapper[4820]: I0201 14:50:37.199547 4820 scope.go:117] "RemoveContainer" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:50:37 crc kubenswrapper[4820]: E0201 14:50:37.200161 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:50:51 crc kubenswrapper[4820]: I0201 14:50:51.199340 4820 scope.go:117] "RemoveContainer" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:50:51 crc kubenswrapper[4820]: E0201 14:50:51.200182 4820 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:51:04 crc kubenswrapper[4820]: I0201 14:51:04.198209 4820 scope.go:117] "RemoveContainer" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:51:04 crc kubenswrapper[4820]: E0201 14:51:04.199111 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:51:07 crc kubenswrapper[4820]: I0201 14:51:07.220186 4820 scope.go:117] "RemoveContainer" containerID="76dfe0b4b7a5a7d86c565f56ca18fcd30a6dfa5626b5a9dffc541bea05e4ee74" Feb 01 14:51:19 crc kubenswrapper[4820]: I0201 14:51:19.208164 4820 scope.go:117] "RemoveContainer" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:51:19 crc kubenswrapper[4820]: E0201 14:51:19.209616 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:51:31 crc kubenswrapper[4820]: I0201 14:51:31.199813 4820 scope.go:117] "RemoveContainer" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:51:31 crc kubenswrapper[4820]: E0201 14:51:31.200564 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:51:43 crc kubenswrapper[4820]: I0201 14:51:43.199515 4820 scope.go:117] "RemoveContainer" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:51:43 crc kubenswrapper[4820]: E0201 14:51:43.200706 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:51:56 crc kubenswrapper[4820]: I0201 14:51:56.198757 4820 scope.go:117] "RemoveContainer" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:51:56 crc kubenswrapper[4820]: I0201 14:51:56.642397 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerStarted","Data":"e1ac13f57d76a51287898e8d454a95536e8fab51db09445ace007faf6813a62c"} Feb 01 14:52:24 crc kubenswrapper[4820]: E0201 14:52:24.158645 4820 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.73:44056->38.102.83.73:46051: write tcp 38.102.83.73:44056->38.102.83.73:46051: write: broken pipe Feb 01 14:52:28 crc kubenswrapper[4820]: E0201 14:52:28.351359 4820 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.73:44106->38.102.83.73:46051: write tcp 38.102.83.73:44106->38.102.83.73:46051: write: connection reset by peer Feb 01 14:52:46 crc kubenswrapper[4820]: I0201 14:52:46.751927 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-vcs9j"] Feb 01 14:52:46 crc kubenswrapper[4820]: I0201 14:52:46.764034 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb"] Feb 01 14:52:46 crc kubenswrapper[4820]: I0201 14:52:46.786067 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x"] Feb 01 14:52:46 crc kubenswrapper[4820]: I0201 14:52:46.793219 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf"] Feb 01 14:52:46 crc kubenswrapper[4820]: I0201 14:52:46.802594 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-j9lg2"] Feb 01 14:52:46 crc kubenswrapper[4820]: I0201 14:52:46.812259 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj"] Feb 01 14:52:46 crc kubenswrapper[4820]: I0201 14:52:46.822014 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn"] Feb 01 14:52:46 crc kubenswrapper[4820]: I0201 14:52:46.833036 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw"] Feb 01 14:52:46 crc kubenswrapper[4820]: I0201 14:52:46.840069 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7tbsb"] Feb 01 14:52:46 crc kubenswrapper[4820]: I0201 14:52:46.846246 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb"] Feb 01 14:52:46 crc kubenswrapper[4820]: I0201 14:52:46.852325 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-vcs9j"] Feb 01 14:52:46 crc kubenswrapper[4820]: I0201 14:52:46.857772 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-j9lg2"] Feb 01 14:52:46 crc kubenswrapper[4820]: I0201 14:52:46.862867 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gmf5x"] Feb 01 14:52:46 crc kubenswrapper[4820]: I0201 14:52:46.867979 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kxhmj"] Feb 01 14:52:46 crc kubenswrapper[4820]: I0201 14:52:46.873288 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcdkf"] Feb 01 14:52:46 crc kubenswrapper[4820]: I0201 14:52:46.879540 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4rnxz"] Feb 01 14:52:46 crc kubenswrapper[4820]: I0201 14:52:46.884627 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bj8vn"] Feb 01 14:52:46 crc kubenswrapper[4820]: I0201 14:52:46.889668 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kltxb"] Feb 01 14:52:46 crc kubenswrapper[4820]: I0201 14:52:46.894958 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ljgjw"] Feb 01 14:52:46 crc kubenswrapper[4820]: I0201 14:52:46.900142 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4rnxz"] Feb 01 14:52:47 crc kubenswrapper[4820]: I0201 14:52:47.208487 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10758666-3fd3-4e5b-9afc-56c22f714fba" path="/var/lib/kubelet/pods/10758666-3fd3-4e5b-9afc-56c22f714fba/volumes" Feb 01 14:52:47 crc kubenswrapper[4820]: I0201 14:52:47.209489 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="583cc935-a059-4d89-8113-5f03c1ad96ca" path="/var/lib/kubelet/pods/583cc935-a059-4d89-8113-5f03c1ad96ca/volumes" Feb 01 14:52:47 crc kubenswrapper[4820]: I0201 14:52:47.210230 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68589532-7a9c-4722-97b0-4fac8e42fac7" path="/var/lib/kubelet/pods/68589532-7a9c-4722-97b0-4fac8e42fac7/volumes" Feb 01 14:52:47 crc kubenswrapper[4820]: I0201 14:52:47.210833 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71397bae-0f2c-4738-99b7-7bc158ab235b" path="/var/lib/kubelet/pods/71397bae-0f2c-4738-99b7-7bc158ab235b/volumes" Feb 01 14:52:47 crc kubenswrapper[4820]: I0201 14:52:47.212108 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81fc5a7e-aabf-4edd-9710-1f4322485ab7" path="/var/lib/kubelet/pods/81fc5a7e-aabf-4edd-9710-1f4322485ab7/volumes" Feb 01 14:52:47 crc kubenswrapper[4820]: I0201 14:52:47.213069 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996" path="/var/lib/kubelet/pods/ae5e3ab3-9f27-45e0-bbd9-6d9ee72a8996/volumes" Feb 01 14:52:47 crc kubenswrapper[4820]: I0201 14:52:47.213862 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b503e0ef-9cdc-4034-b179-02a10457b229" path="/var/lib/kubelet/pods/b503e0ef-9cdc-4034-b179-02a10457b229/volumes" Feb 01 14:52:47 crc kubenswrapper[4820]: I0201 14:52:47.215315 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdfc658e-1be5-4a3d-af19-4863b5675303" path="/var/lib/kubelet/pods/bdfc658e-1be5-4a3d-af19-4863b5675303/volumes" Feb 01 14:52:47 crc kubenswrapper[4820]: I0201 14:52:47.215983 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc44983a-1bae-4d8c-b36c-b74d3d390cc5" path="/var/lib/kubelet/pods/cc44983a-1bae-4d8c-b36c-b74d3d390cc5/volumes" Feb 01 14:52:47 crc kubenswrapper[4820]: I0201 14:52:47.216645 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec1f8af4-d9dd-47eb-addb-f8247cff3b68" path="/var/lib/kubelet/pods/ec1f8af4-d9dd-47eb-addb-f8247cff3b68/volumes" Feb 01 14:52:52 crc kubenswrapper[4820]: I0201 14:52:52.225363 
Feb 01 14:52:52 crc kubenswrapper[4820]: E0201 14:52:52.226329 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b503e0ef-9cdc-4034-b179-02a10457b229" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 01 14:52:52 crc kubenswrapper[4820]: I0201 14:52:52.226345 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b503e0ef-9cdc-4034-b179-02a10457b229" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 01 14:52:52 crc kubenswrapper[4820]: I0201 14:52:52.226563 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b503e0ef-9cdc-4034-b179-02a10457b229" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 01 14:52:52 crc kubenswrapper[4820]: I0201 14:52:52.227297 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb"
Feb 01 14:52:52 crc kubenswrapper[4820]: I0201 14:52:52.230396 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Feb 01 14:52:52 crc kubenswrapper[4820]: I0201 14:52:52.230594 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 01 14:52:52 crc kubenswrapper[4820]: I0201 14:52:52.231320 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw"
Feb 01 14:52:52 crc kubenswrapper[4820]: I0201 14:52:52.231457 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 01 14:52:52 crc kubenswrapper[4820]: I0201 14:52:52.238820 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb"]
Feb 01 14:52:52 crc kubenswrapper[4820]: I0201 14:52:52.248013 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 01 14:52:52 crc kubenswrapper[4820]: I0201 14:52:52.371094 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb\" (UID: \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb"
Feb 01 14:52:52 crc kubenswrapper[4820]: I0201 14:52:52.371150 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb\" (UID: \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb"
Feb 01 14:52:52 crc kubenswrapper[4820]: I0201 14:52:52.371377 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb\" (UID: \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb"
Feb 01 14:52:52 crc kubenswrapper[4820]: I0201 14:52:52.371657 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb\" (UID: \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb\" (UID: \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb" Feb 01 14:52:52 crc kubenswrapper[4820]: I0201 14:52:52.371776 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cffr7\" (UniqueName: \"kubernetes.io/projected/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-kube-api-access-cffr7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb\" (UID: \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb" Feb 01 14:52:52 crc kubenswrapper[4820]: I0201 14:52:52.473015 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb\" (UID: \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb" Feb 01 14:52:52 crc kubenswrapper[4820]: I0201 14:52:52.473114 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb\" (UID: \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb" Feb 01 14:52:52 crc kubenswrapper[4820]: I0201 14:52:52.473161 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cffr7\" (UniqueName: \"kubernetes.io/projected/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-kube-api-access-cffr7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb\" (UID: \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb" Feb 01 14:52:52 crc kubenswrapper[4820]: I0201 14:52:52.473186 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb\" (UID: \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb" Feb 01 14:52:52 crc kubenswrapper[4820]: I0201 14:52:52.473222 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb\" (UID: \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb" Feb 01 14:52:52 crc kubenswrapper[4820]: I0201 14:52:52.479285 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb\" (UID: \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb" Feb 01 14:52:52 crc 
kubenswrapper[4820]: I0201 14:52:52.479674 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb\" (UID: \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb" Feb 01 14:52:52 crc kubenswrapper[4820]: I0201 14:52:52.480440 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb\" (UID: \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb" Feb 01 14:52:52 crc kubenswrapper[4820]: I0201 14:52:52.482739 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb\" (UID: \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb" Feb 01 14:52:52 crc kubenswrapper[4820]: I0201 14:52:52.498819 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cffr7\" (UniqueName: \"kubernetes.io/projected/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-kube-api-access-cffr7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb\" (UID: \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb" Feb 01 14:52:52 crc kubenswrapper[4820]: I0201 14:52:52.551942 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb" Feb 01 14:52:53 crc kubenswrapper[4820]: I0201 14:52:53.060640 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb"] Feb 01 14:52:53 crc kubenswrapper[4820]: W0201 14:52:53.071008 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5297ced9_f7cd_4f62_8cbc_560f6395b5ea.slice/crio-4f7ef9c5370e0ca7766a0ded0b68aa23938d0e4ba98689b831eecf91e030a7af WatchSource:0}: Error finding container 4f7ef9c5370e0ca7766a0ded0b68aa23938d0e4ba98689b831eecf91e030a7af: Status 404 returned error can't find the container with id 4f7ef9c5370e0ca7766a0ded0b68aa23938d0e4ba98689b831eecf91e030a7af Feb 01 14:52:53 crc kubenswrapper[4820]: I0201 14:52:53.073113 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 01 14:52:53 crc kubenswrapper[4820]: I0201 14:52:53.143633 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb" event={"ID":"5297ced9-f7cd-4f62-8cbc-560f6395b5ea","Type":"ContainerStarted","Data":"4f7ef9c5370e0ca7766a0ded0b68aa23938d0e4ba98689b831eecf91e030a7af"} Feb 01 14:52:54 crc kubenswrapper[4820]: I0201 14:52:54.155154 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb" event={"ID":"5297ced9-f7cd-4f62-8cbc-560f6395b5ea","Type":"ContainerStarted","Data":"d3b1ebaee1da6e61c3dcdcc13b53234eeea9c721716b0d9665db32297192d134"} Feb 01 14:52:54 crc kubenswrapper[4820]: I0201 14:52:54.182230 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb" podStartSLOduration=1.763250601 podStartE2EDuration="2.182209589s" podCreationTimestamp="2026-02-01 14:52:52 +0000 UTC" firstStartedPulling="2026-02-01 14:52:53.072897368 +0000 UTC m=+1914.593263652" lastFinishedPulling="2026-02-01 14:52:53.491856366 +0000 UTC m=+1915.012222640" observedRunningTime="2026-02-01 14:52:54.176438081 +0000 UTC m=+1915.696804375" watchObservedRunningTime="2026-02-01 14:52:54.182209589 +0000 UTC m=+1915.702575893" Feb 01 14:53:04 crc kubenswrapper[4820]: I0201 14:53:04.252500 4820 generic.go:334] "Generic (PLEG): container finished" podID="5297ced9-f7cd-4f62-8cbc-560f6395b5ea" containerID="d3b1ebaee1da6e61c3dcdcc13b53234eeea9c721716b0d9665db32297192d134" exitCode=0 Feb 01 14:53:04 crc kubenswrapper[4820]: I0201 14:53:04.253159 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb" event={"ID":"5297ced9-f7cd-4f62-8cbc-560f6395b5ea","Type":"ContainerDied","Data":"d3b1ebaee1da6e61c3dcdcc13b53234eeea9c721716b0d9665db32297192d134"} Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:05.653957 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:05.801707 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-ceph\") pod \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\" (UID: \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\") " Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:05.802143 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-ssh-key-openstack-edpm-ipam\") pod \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\" (UID: \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\") " Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:05.802206 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cffr7\" (UniqueName: \"kubernetes.io/projected/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-kube-api-access-cffr7\") pod \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\" (UID: \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\") " Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:05.802303 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-repo-setup-combined-ca-bundle\") pod \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\" (UID: \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\") " Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:05.802451 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-inventory\") pod \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\" (UID: \"5297ced9-f7cd-4f62-8cbc-560f6395b5ea\") " Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:05.806918 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-kube-api-access-cffr7" (OuterVolumeSpecName: "kube-api-access-cffr7") pod "5297ced9-f7cd-4f62-8cbc-560f6395b5ea" (UID: "5297ced9-f7cd-4f62-8cbc-560f6395b5ea"). InnerVolumeSpecName "kube-api-access-cffr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:05.807129 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-ceph" (OuterVolumeSpecName: "ceph") pod "5297ced9-f7cd-4f62-8cbc-560f6395b5ea" (UID: "5297ced9-f7cd-4f62-8cbc-560f6395b5ea"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:05.810018 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "5297ced9-f7cd-4f62-8cbc-560f6395b5ea" (UID: "5297ced9-f7cd-4f62-8cbc-560f6395b5ea"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:05.825075 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5297ced9-f7cd-4f62-8cbc-560f6395b5ea" (UID: "5297ced9-f7cd-4f62-8cbc-560f6395b5ea"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:05.825525 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-inventory" (OuterVolumeSpecName: "inventory") pod "5297ced9-f7cd-4f62-8cbc-560f6395b5ea" (UID: "5297ced9-f7cd-4f62-8cbc-560f6395b5ea"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:05.904850 4820 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:05.904914 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-inventory\") on node \"crc\" DevicePath \"\"" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:05.904931 4820 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-ceph\") on node \"crc\" DevicePath \"\"" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:05.904943 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:05.904957 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cffr7\" (UniqueName: \"kubernetes.io/projected/5297ced9-f7cd-4f62-8cbc-560f6395b5ea-kube-api-access-cffr7\") on node \"crc\" DevicePath \"\"" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.268721 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb" event={"ID":"5297ced9-f7cd-4f62-8cbc-560f6395b5ea","Type":"ContainerDied","Data":"4f7ef9c5370e0ca7766a0ded0b68aa23938d0e4ba98689b831eecf91e030a7af"} Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.268760 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f7ef9c5370e0ca7766a0ded0b68aa23938d0e4ba98689b831eecf91e030a7af" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.268822 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.334279 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh"] Feb 01 14:53:06 crc kubenswrapper[4820]: E0201 14:53:06.334643 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5297ced9-f7cd-4f62-8cbc-560f6395b5ea" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.334663 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5297ced9-f7cd-4f62-8cbc-560f6395b5ea" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.334822 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5297ced9-f7cd-4f62-8cbc-560f6395b5ea" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.335398 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.338839 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.339205 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.339385 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.339486 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.339587 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.346421 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh"] Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.416494 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rwmh\" (UniqueName: \"kubernetes.io/projected/8fd3481e-91d0-45af-9796-cdca54b2b647-kube-api-access-6rwmh\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh\" (UID: \"8fd3481e-91d0-45af-9796-cdca54b2b647\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.416563 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8fd3481e-91d0-45af-9796-cdca54b2b647-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh\" (UID: \"8fd3481e-91d0-45af-9796-cdca54b2b647\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.416665 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd3481e-91d0-45af-9796-cdca54b2b647-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh\" (UID: \"8fd3481e-91d0-45af-9796-cdca54b2b647\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.416696 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fd3481e-91d0-45af-9796-cdca54b2b647-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh\" (UID: \"8fd3481e-91d0-45af-9796-cdca54b2b647\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.416833 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8fd3481e-91d0-45af-9796-cdca54b2b647-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh\" (UID: \"8fd3481e-91d0-45af-9796-cdca54b2b647\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.518802 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8fd3481e-91d0-45af-9796-cdca54b2b647-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh\" (UID: \"8fd3481e-91d0-45af-9796-cdca54b2b647\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.519081 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rwmh\" (UniqueName: \"kubernetes.io/projected/8fd3481e-91d0-45af-9796-cdca54b2b647-kube-api-access-6rwmh\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh\" (UID: \"8fd3481e-91d0-45af-9796-cdca54b2b647\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.519126 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8fd3481e-91d0-45af-9796-cdca54b2b647-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh\" (UID: \"8fd3481e-91d0-45af-9796-cdca54b2b647\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.519202 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd3481e-91d0-45af-9796-cdca54b2b647-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh\" (UID: \"8fd3481e-91d0-45af-9796-cdca54b2b647\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.519240 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fd3481e-91d0-45af-9796-cdca54b2b647-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh\" (UID: \"8fd3481e-91d0-45af-9796-cdca54b2b647\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.524975 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8fd3481e-91d0-45af-9796-cdca54b2b647-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh\" (UID: 
\"8fd3481e-91d0-45af-9796-cdca54b2b647\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.525450 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fd3481e-91d0-45af-9796-cdca54b2b647-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh\" (UID: \"8fd3481e-91d0-45af-9796-cdca54b2b647\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.526326 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd3481e-91d0-45af-9796-cdca54b2b647-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh\" (UID: \"8fd3481e-91d0-45af-9796-cdca54b2b647\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.527193 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8fd3481e-91d0-45af-9796-cdca54b2b647-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh\" (UID: \"8fd3481e-91d0-45af-9796-cdca54b2b647\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.542514 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rwmh\" (UniqueName: \"kubernetes.io/projected/8fd3481e-91d0-45af-9796-cdca54b2b647-kube-api-access-6rwmh\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh\" (UID: \"8fd3481e-91d0-45af-9796-cdca54b2b647\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh" Feb 01 14:53:06 crc kubenswrapper[4820]: I0201 14:53:06.663273 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh" Feb 01 14:53:07 crc kubenswrapper[4820]: I0201 14:53:07.190672 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh"] Feb 01 14:53:07 crc kubenswrapper[4820]: I0201 14:53:07.277384 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh" event={"ID":"8fd3481e-91d0-45af-9796-cdca54b2b647","Type":"ContainerStarted","Data":"01386ba72d03dea958db4dd7fb1a31ea37ff52824424bdfe6cac9ff866c326a9"} Feb 01 14:53:07 crc kubenswrapper[4820]: I0201 14:53:07.351644 4820 scope.go:117] "RemoveContainer" containerID="42d4a86a455f569ff05062c3cac3b4f313c72ec9704bf5a9127b71fce007bd08" Feb 01 14:53:07 crc kubenswrapper[4820]: I0201 14:53:07.401167 4820 scope.go:117] "RemoveContainer" containerID="21bfb168f68926066b3b94491bedf603dd145ad7e6e621572510bfd6dbadbed7" Feb 01 14:53:07 crc kubenswrapper[4820]: I0201 14:53:07.434335 4820 scope.go:117] "RemoveContainer" containerID="bbdad66549df31e3945ef80a25cb22592210e3e4216b5c7f892a1a4fece6dec3" Feb 01 14:53:07 crc kubenswrapper[4820]: I0201 14:53:07.455137 4820 scope.go:117] "RemoveContainer" containerID="3cab711350f57602690a3cae48aa2b9779003b95a050e8ed74cdd96e54650897" Feb 01 14:53:07 crc kubenswrapper[4820]: I0201 14:53:07.480144 4820 scope.go:117] "RemoveContainer" containerID="5e39b15372a7cc7be94ed362c777fb49817d99d10390cbc0a10eb2720cf92df6" Feb 01 14:53:08 crc kubenswrapper[4820]: I0201 14:53:08.306752 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh" event={"ID":"8fd3481e-91d0-45af-9796-cdca54b2b647","Type":"ContainerStarted","Data":"12c1563090f08e3d1d19e2bed05bf85b3887761be34348aa19150bd130ae2a14"} Feb 01 14:53:08 crc kubenswrapper[4820]: I0201 14:53:08.333850 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh" podStartSLOduration=1.892814328 podStartE2EDuration="2.333805451s" podCreationTimestamp="2026-02-01 14:53:06 +0000 UTC" firstStartedPulling="2026-02-01 14:53:07.196856002 +0000 UTC m=+1928.717222286" lastFinishedPulling="2026-02-01 14:53:07.637847115 +0000 UTC m=+1929.158213409" observedRunningTime="2026-02-01 14:53:08.329798075 +0000 UTC m=+1929.850164369" watchObservedRunningTime="2026-02-01 14:53:08.333805451 +0000 UTC m=+1929.854171735" Feb 01 14:54:07 crc kubenswrapper[4820]: I0201 14:54:07.614984 4820 scope.go:117] "RemoveContainer" containerID="812f87636cfb895a92bfe1161318c6aa66fa0b5fbab30b7eb144a68ba5bde0d1" Feb 01 14:54:07 crc kubenswrapper[4820]: I0201 14:54:07.682747 4820 scope.go:117] "RemoveContainer" containerID="cd07d436baa9415ebd2ab1069512cb325b16d8c63101a15f838aec65b37b6bac" Feb 01 14:54:19 crc kubenswrapper[4820]: I0201 14:54:19.242632 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:54:19 crc kubenswrapper[4820]: I0201 14:54:19.243325 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
Feb 01 14:54:39 crc kubenswrapper[4820]: I0201 14:54:39.691519 4820 generic.go:334] "Generic (PLEG): container finished" podID="8fd3481e-91d0-45af-9796-cdca54b2b647" containerID="12c1563090f08e3d1d19e2bed05bf85b3887761be34348aa19150bd130ae2a14" exitCode=0
Feb 01 14:54:39 crc kubenswrapper[4820]: I0201 14:54:39.691609 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh" event={"ID":"8fd3481e-91d0-45af-9796-cdca54b2b647","Type":"ContainerDied","Data":"12c1563090f08e3d1d19e2bed05bf85b3887761be34348aa19150bd130ae2a14"}
Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.188297 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh"
Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.257652 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fd3481e-91d0-45af-9796-cdca54b2b647-inventory\") pod \"8fd3481e-91d0-45af-9796-cdca54b2b647\" (UID: \"8fd3481e-91d0-45af-9796-cdca54b2b647\") "
Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.257704 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8fd3481e-91d0-45af-9796-cdca54b2b647-ssh-key-openstack-edpm-ipam\") pod \"8fd3481e-91d0-45af-9796-cdca54b2b647\" (UID: \"8fd3481e-91d0-45af-9796-cdca54b2b647\") "
Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.257792 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd3481e-91d0-45af-9796-cdca54b2b647-bootstrap-combined-ca-bundle\") pod \"8fd3481e-91d0-45af-9796-cdca54b2b647\" (UID: \"8fd3481e-91d0-45af-9796-cdca54b2b647\") "
Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.257815 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8fd3481e-91d0-45af-9796-cdca54b2b647-ceph\") pod \"8fd3481e-91d0-45af-9796-cdca54b2b647\" (UID: \"8fd3481e-91d0-45af-9796-cdca54b2b647\") "
Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.257892 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rwmh\" (UniqueName: \"kubernetes.io/projected/8fd3481e-91d0-45af-9796-cdca54b2b647-kube-api-access-6rwmh\") pod \"8fd3481e-91d0-45af-9796-cdca54b2b647\" (UID: \"8fd3481e-91d0-45af-9796-cdca54b2b647\") "
Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.263583 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd3481e-91d0-45af-9796-cdca54b2b647-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "8fd3481e-91d0-45af-9796-cdca54b2b647" (UID: "8fd3481e-91d0-45af-9796-cdca54b2b647"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.267053 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd3481e-91d0-45af-9796-cdca54b2b647-ceph" (OuterVolumeSpecName: "ceph") pod "8fd3481e-91d0-45af-9796-cdca54b2b647" (UID: "8fd3481e-91d0-45af-9796-cdca54b2b647"). InnerVolumeSpecName "ceph".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.280794 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fd3481e-91d0-45af-9796-cdca54b2b647-kube-api-access-6rwmh" (OuterVolumeSpecName: "kube-api-access-6rwmh") pod "8fd3481e-91d0-45af-9796-cdca54b2b647" (UID: "8fd3481e-91d0-45af-9796-cdca54b2b647"). InnerVolumeSpecName "kube-api-access-6rwmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.284134 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd3481e-91d0-45af-9796-cdca54b2b647-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8fd3481e-91d0-45af-9796-cdca54b2b647" (UID: "8fd3481e-91d0-45af-9796-cdca54b2b647"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.285647 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd3481e-91d0-45af-9796-cdca54b2b647-inventory" (OuterVolumeSpecName: "inventory") pod "8fd3481e-91d0-45af-9796-cdca54b2b647" (UID: "8fd3481e-91d0-45af-9796-cdca54b2b647"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.360475 4820 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd3481e-91d0-45af-9796-cdca54b2b647-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.360516 4820 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8fd3481e-91d0-45af-9796-cdca54b2b647-ceph\") on node \"crc\" DevicePath \"\"" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.360535 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rwmh\" (UniqueName: \"kubernetes.io/projected/8fd3481e-91d0-45af-9796-cdca54b2b647-kube-api-access-6rwmh\") on node \"crc\" DevicePath \"\"" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.360555 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fd3481e-91d0-45af-9796-cdca54b2b647-inventory\") on node \"crc\" DevicePath \"\"" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.360567 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8fd3481e-91d0-45af-9796-cdca54b2b647-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.710191 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh" event={"ID":"8fd3481e-91d0-45af-9796-cdca54b2b647","Type":"ContainerDied","Data":"01386ba72d03dea958db4dd7fb1a31ea37ff52824424bdfe6cac9ff866c326a9"} Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.710236 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01386ba72d03dea958db4dd7fb1a31ea37ff52824424bdfe6cac9ff866c326a9" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.710258 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.798032 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58x6d"] Feb 01 14:54:41 crc kubenswrapper[4820]: E0201 14:54:41.798974 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd3481e-91d0-45af-9796-cdca54b2b647" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.799006 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd3481e-91d0-45af-9796-cdca54b2b647" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.799372 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd3481e-91d0-45af-9796-cdca54b2b647" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.800424 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58x6d" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.805313 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58x6d"] Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.805616 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.805907 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.806097 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.806467 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.806663 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.869119 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ccf053de-7e43-4c69-88c0-89c1b4d4832e-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-58x6d\" (UID: \"ccf053de-7e43-4c69-88c0-89c1b4d4832e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58x6d" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.869180 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccf053de-7e43-4c69-88c0-89c1b4d4832e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-58x6d\" (UID: \"ccf053de-7e43-4c69-88c0-89c1b4d4832e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58x6d" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.869228 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccf053de-7e43-4c69-88c0-89c1b4d4832e-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-58x6d\" (UID: 
\"ccf053de-7e43-4c69-88c0-89c1b4d4832e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58x6d" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.869290 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfdqz\" (UniqueName: \"kubernetes.io/projected/ccf053de-7e43-4c69-88c0-89c1b4d4832e-kube-api-access-kfdqz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-58x6d\" (UID: \"ccf053de-7e43-4c69-88c0-89c1b4d4832e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58x6d" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.970550 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ccf053de-7e43-4c69-88c0-89c1b4d4832e-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-58x6d\" (UID: \"ccf053de-7e43-4c69-88c0-89c1b4d4832e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58x6d" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.970614 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccf053de-7e43-4c69-88c0-89c1b4d4832e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-58x6d\" (UID: \"ccf053de-7e43-4c69-88c0-89c1b4d4832e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58x6d" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.970660 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccf053de-7e43-4c69-88c0-89c1b4d4832e-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-58x6d\" (UID: \"ccf053de-7e43-4c69-88c0-89c1b4d4832e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58x6d" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.970692 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfdqz\" (UniqueName: \"kubernetes.io/projected/ccf053de-7e43-4c69-88c0-89c1b4d4832e-kube-api-access-kfdqz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-58x6d\" (UID: \"ccf053de-7e43-4c69-88c0-89c1b4d4832e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58x6d" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.975399 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccf053de-7e43-4c69-88c0-89c1b4d4832e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-58x6d\" (UID: \"ccf053de-7e43-4c69-88c0-89c1b4d4832e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58x6d" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.975457 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ccf053de-7e43-4c69-88c0-89c1b4d4832e-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-58x6d\" (UID: \"ccf053de-7e43-4c69-88c0-89c1b4d4832e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58x6d" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.976562 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccf053de-7e43-4c69-88c0-89c1b4d4832e-ssh-key-openstack-edpm-ipam\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-58x6d\" (UID: \"ccf053de-7e43-4c69-88c0-89c1b4d4832e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58x6d" Feb 01 14:54:41 crc kubenswrapper[4820]: I0201 14:54:41.987172 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfdqz\" (UniqueName: \"kubernetes.io/projected/ccf053de-7e43-4c69-88c0-89c1b4d4832e-kube-api-access-kfdqz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-58x6d\" (UID: \"ccf053de-7e43-4c69-88c0-89c1b4d4832e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58x6d" Feb 01 14:54:42 crc kubenswrapper[4820]: I0201 14:54:42.126020 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58x6d" Feb 01 14:54:42 crc kubenswrapper[4820]: I0201 14:54:42.664359 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58x6d"] Feb 01 14:54:42 crc kubenswrapper[4820]: I0201 14:54:42.719388 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58x6d" event={"ID":"ccf053de-7e43-4c69-88c0-89c1b4d4832e","Type":"ContainerStarted","Data":"f68f53ac009f8ae08ce96374413a671182af9e000c06e8d5f99724ec8f1e9eb8"} Feb 01 14:54:43 crc kubenswrapper[4820]: I0201 14:54:43.727451 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58x6d" event={"ID":"ccf053de-7e43-4c69-88c0-89c1b4d4832e","Type":"ContainerStarted","Data":"9e64a875b1e885976f9d14761d375b3b8fdd626364eafff6a8112b7253cbccce"} Feb 01 14:54:43 crc kubenswrapper[4820]: I0201 14:54:43.746798 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58x6d" podStartSLOduration=2.349642919 podStartE2EDuration="2.746779877s" podCreationTimestamp="2026-02-01 14:54:41 +0000 UTC" firstStartedPulling="2026-02-01 14:54:42.673211569 +0000 UTC m=+2024.193577863" lastFinishedPulling="2026-02-01 14:54:43.070348537 +0000 UTC m=+2024.590714821" observedRunningTime="2026-02-01 14:54:43.740723123 +0000 UTC m=+2025.261089407" watchObservedRunningTime="2026-02-01 14:54:43.746779877 +0000 UTC m=+2025.267146161" Feb 01 14:54:49 crc kubenswrapper[4820]: I0201 14:54:49.242888 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:54:49 crc kubenswrapper[4820]: I0201 14:54:49.243509 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:55:06 crc kubenswrapper[4820]: I0201 14:55:06.909637 4820 generic.go:334] "Generic (PLEG): container finished" podID="ccf053de-7e43-4c69-88c0-89c1b4d4832e" containerID="9e64a875b1e885976f9d14761d375b3b8fdd626364eafff6a8112b7253cbccce" exitCode=0 Feb 01 14:55:06 crc kubenswrapper[4820]: I0201 14:55:06.909774 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58x6d" event={"ID":"ccf053de-7e43-4c69-88c0-89c1b4d4832e","Type":"ContainerDied","Data":"9e64a875b1e885976f9d14761d375b3b8fdd626364eafff6a8112b7253cbccce"} Feb 01 14:55:07 crc kubenswrapper[4820]: I0201 14:55:07.757376 4820 scope.go:117] "RemoveContainer" containerID="acc3e072988493e639a850fd978f9d569e6aa2683aab2f9f0037121208ca4da3" Feb 01 14:55:07 crc kubenswrapper[4820]: I0201 14:55:07.798273 4820 scope.go:117] "RemoveContainer" containerID="ed9fbff21ffe69393de869ba5a5a3c9383061651600625c842ce8e7a93c984ff" Feb 01 14:55:07 crc kubenswrapper[4820]: I0201 14:55:07.838095 4820 scope.go:117] "RemoveContainer" containerID="89c18dee8e5420e9891c7c22fb640fd325a4d2fcc67399cac38572f758da65f9" Feb 01 14:55:08 crc kubenswrapper[4820]: I0201 14:55:08.353907 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58x6d" Feb 01 14:55:08 crc kubenswrapper[4820]: I0201 14:55:08.384993 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccf053de-7e43-4c69-88c0-89c1b4d4832e-inventory\") pod \"ccf053de-7e43-4c69-88c0-89c1b4d4832e\" (UID: \"ccf053de-7e43-4c69-88c0-89c1b4d4832e\") " Feb 01 14:55:08 crc kubenswrapper[4820]: I0201 14:55:08.385057 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ccf053de-7e43-4c69-88c0-89c1b4d4832e-ceph\") pod \"ccf053de-7e43-4c69-88c0-89c1b4d4832e\" (UID: \"ccf053de-7e43-4c69-88c0-89c1b4d4832e\") " Feb 01 14:55:08 crc kubenswrapper[4820]: I0201 14:55:08.385105 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfdqz\" (UniqueName: \"kubernetes.io/projected/ccf053de-7e43-4c69-88c0-89c1b4d4832e-kube-api-access-kfdqz\") pod \"ccf053de-7e43-4c69-88c0-89c1b4d4832e\" (UID: \"ccf053de-7e43-4c69-88c0-89c1b4d4832e\") " Feb 01 14:55:08 crc kubenswrapper[4820]: I0201 14:55:08.385166 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccf053de-7e43-4c69-88c0-89c1b4d4832e-ssh-key-openstack-edpm-ipam\") pod \"ccf053de-7e43-4c69-88c0-89c1b4d4832e\" (UID: \"ccf053de-7e43-4c69-88c0-89c1b4d4832e\") " Feb 01 14:55:08 crc kubenswrapper[4820]: I0201 14:55:08.391199 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccf053de-7e43-4c69-88c0-89c1b4d4832e-ceph" (OuterVolumeSpecName: "ceph") pod "ccf053de-7e43-4c69-88c0-89c1b4d4832e" (UID: "ccf053de-7e43-4c69-88c0-89c1b4d4832e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:55:08 crc kubenswrapper[4820]: I0201 14:55:08.393216 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccf053de-7e43-4c69-88c0-89c1b4d4832e-kube-api-access-kfdqz" (OuterVolumeSpecName: "kube-api-access-kfdqz") pod "ccf053de-7e43-4c69-88c0-89c1b4d4832e" (UID: "ccf053de-7e43-4c69-88c0-89c1b4d4832e"). InnerVolumeSpecName "kube-api-access-kfdqz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:55:08 crc kubenswrapper[4820]: I0201 14:55:08.414101 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccf053de-7e43-4c69-88c0-89c1b4d4832e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ccf053de-7e43-4c69-88c0-89c1b4d4832e" (UID: "ccf053de-7e43-4c69-88c0-89c1b4d4832e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:55:08 crc kubenswrapper[4820]: I0201 14:55:08.421474 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccf053de-7e43-4c69-88c0-89c1b4d4832e-inventory" (OuterVolumeSpecName: "inventory") pod "ccf053de-7e43-4c69-88c0-89c1b4d4832e" (UID: "ccf053de-7e43-4c69-88c0-89c1b4d4832e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:55:08 crc kubenswrapper[4820]: I0201 14:55:08.487393 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfdqz\" (UniqueName: \"kubernetes.io/projected/ccf053de-7e43-4c69-88c0-89c1b4d4832e-kube-api-access-kfdqz\") on node \"crc\" DevicePath \"\"" Feb 01 14:55:08 crc kubenswrapper[4820]: I0201 14:55:08.487426 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccf053de-7e43-4c69-88c0-89c1b4d4832e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 14:55:08 crc kubenswrapper[4820]: I0201 14:55:08.487436 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccf053de-7e43-4c69-88c0-89c1b4d4832e-inventory\") on node \"crc\" DevicePath \"\"" Feb 01 14:55:08 crc kubenswrapper[4820]: I0201 14:55:08.487444 4820 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ccf053de-7e43-4c69-88c0-89c1b4d4832e-ceph\") on node \"crc\" DevicePath \"\"" Feb 01 14:55:08 crc kubenswrapper[4820]: I0201 14:55:08.945194 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58x6d" event={"ID":"ccf053de-7e43-4c69-88c0-89c1b4d4832e","Type":"ContainerDied","Data":"f68f53ac009f8ae08ce96374413a671182af9e000c06e8d5f99724ec8f1e9eb8"} Feb 01 14:55:08 crc kubenswrapper[4820]: I0201 14:55:08.945232 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f68f53ac009f8ae08ce96374413a671182af9e000c06e8d5f99724ec8f1e9eb8" Feb 01 14:55:08 crc kubenswrapper[4820]: I0201 14:55:08.945281 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58x6d" Feb 01 14:55:09 crc kubenswrapper[4820]: I0201 14:55:09.006496 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv"] Feb 01 14:55:09 crc kubenswrapper[4820]: E0201 14:55:09.006869 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccf053de-7e43-4c69-88c0-89c1b4d4832e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 01 14:55:09 crc kubenswrapper[4820]: I0201 14:55:09.006905 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccf053de-7e43-4c69-88c0-89c1b4d4832e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 01 14:55:09 crc kubenswrapper[4820]: I0201 14:55:09.007071 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccf053de-7e43-4c69-88c0-89c1b4d4832e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 01 14:55:09 crc kubenswrapper[4820]: I0201 14:55:09.007614 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv" Feb 01 14:55:09 crc kubenswrapper[4820]: I0201 14:55:09.010824 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 01 14:55:09 crc kubenswrapper[4820]: I0201 14:55:09.010844 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 01 14:55:09 crc kubenswrapper[4820]: I0201 14:55:09.011040 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 01 14:55:09 crc kubenswrapper[4820]: I0201 14:55:09.011068 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 01 14:55:09 crc kubenswrapper[4820]: I0201 14:55:09.011413 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw" Feb 01 14:55:09 crc kubenswrapper[4820]: I0201 14:55:09.022674 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv"] Feb 01 14:55:09 crc kubenswrapper[4820]: I0201 14:55:09.095814 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/72f3578c-0dac-4edd-ad36-85a7b7930c01-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv\" (UID: \"72f3578c-0dac-4edd-ad36-85a7b7930c01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv" Feb 01 14:55:09 crc kubenswrapper[4820]: I0201 14:55:09.096041 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72f3578c-0dac-4edd-ad36-85a7b7930c01-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv\" (UID: \"72f3578c-0dac-4edd-ad36-85a7b7930c01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv" Feb 01 14:55:09 crc kubenswrapper[4820]: I0201 14:55:09.096238 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwktf\" (UniqueName: \"kubernetes.io/projected/72f3578c-0dac-4edd-ad36-85a7b7930c01-kube-api-access-rwktf\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv\" (UID: \"72f3578c-0dac-4edd-ad36-85a7b7930c01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv" Feb 01 14:55:09 crc kubenswrapper[4820]: I0201 14:55:09.096505 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72f3578c-0dac-4edd-ad36-85a7b7930c01-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv\" (UID: \"72f3578c-0dac-4edd-ad36-85a7b7930c01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv" Feb 01 14:55:09 crc kubenswrapper[4820]: I0201 14:55:09.197793 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72f3578c-0dac-4edd-ad36-85a7b7930c01-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv\" (UID: \"72f3578c-0dac-4edd-ad36-85a7b7930c01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv" Feb 01 14:55:09 crc kubenswrapper[4820]: I0201 14:55:09.198205 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwktf\" (UniqueName: \"kubernetes.io/projected/72f3578c-0dac-4edd-ad36-85a7b7930c01-kube-api-access-rwktf\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv\" (UID: \"72f3578c-0dac-4edd-ad36-85a7b7930c01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv" Feb 01 14:55:09 crc kubenswrapper[4820]: I0201 14:55:09.198263 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72f3578c-0dac-4edd-ad36-85a7b7930c01-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv\" (UID: \"72f3578c-0dac-4edd-ad36-85a7b7930c01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv" Feb 01 14:55:09 crc kubenswrapper[4820]: I0201 14:55:09.198294 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/72f3578c-0dac-4edd-ad36-85a7b7930c01-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv\" (UID: \"72f3578c-0dac-4edd-ad36-85a7b7930c01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv" Feb 01 14:55:09 crc kubenswrapper[4820]: I0201 14:55:09.201701 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72f3578c-0dac-4edd-ad36-85a7b7930c01-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv\" (UID: \"72f3578c-0dac-4edd-ad36-85a7b7930c01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv" Feb 01 14:55:09 crc kubenswrapper[4820]: I0201 14:55:09.201735 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/72f3578c-0dac-4edd-ad36-85a7b7930c01-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv\" (UID: \"72f3578c-0dac-4edd-ad36-85a7b7930c01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv" Feb 01 14:55:09 crc kubenswrapper[4820]: I0201 14:55:09.202113 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/72f3578c-0dac-4edd-ad36-85a7b7930c01-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv\" (UID: \"72f3578c-0dac-4edd-ad36-85a7b7930c01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv" Feb 01 14:55:09 crc kubenswrapper[4820]: I0201 14:55:09.215254 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwktf\" (UniqueName: \"kubernetes.io/projected/72f3578c-0dac-4edd-ad36-85a7b7930c01-kube-api-access-rwktf\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv\" (UID: \"72f3578c-0dac-4edd-ad36-85a7b7930c01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv" Feb 01 14:55:09 crc kubenswrapper[4820]: I0201 14:55:09.327229 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv" Feb 01 14:55:09 crc kubenswrapper[4820]: I0201 14:55:09.912391 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv"] Feb 01 14:55:09 crc kubenswrapper[4820]: I0201 14:55:09.954376 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv" event={"ID":"72f3578c-0dac-4edd-ad36-85a7b7930c01","Type":"ContainerStarted","Data":"4e65c2bc32ef270ef653ee55202fa20a225a58beaab9d623c630ad564d9ff2b9"} Feb 01 14:55:10 crc kubenswrapper[4820]: I0201 14:55:10.964011 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv" event={"ID":"72f3578c-0dac-4edd-ad36-85a7b7930c01","Type":"ContainerStarted","Data":"9d952b59416d63969161c9759288e2d1dfca24367e4babdd3baf17ffdb90e430"} Feb 01 14:55:10 crc kubenswrapper[4820]: I0201 14:55:10.992534 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv" podStartSLOduration=2.485016856 podStartE2EDuration="2.992486884s" podCreationTimestamp="2026-02-01 14:55:08 +0000 UTC" firstStartedPulling="2026-02-01 14:55:09.916373143 +0000 UTC m=+2051.436739437" lastFinishedPulling="2026-02-01 14:55:10.423843181 +0000 UTC m=+2051.944209465" observedRunningTime="2026-02-01 14:55:10.98900018 +0000 UTC m=+2052.509366474" watchObservedRunningTime="2026-02-01 14:55:10.992486884 +0000 UTC m=+2052.512853208" Feb 01 14:55:16 crc kubenswrapper[4820]: I0201 14:55:16.169520 4820 generic.go:334] "Generic (PLEG): container finished" podID="72f3578c-0dac-4edd-ad36-85a7b7930c01" containerID="9d952b59416d63969161c9759288e2d1dfca24367e4babdd3baf17ffdb90e430" exitCode=0 Feb 01 14:55:16 crc kubenswrapper[4820]: I0201 14:55:16.169593 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv" event={"ID":"72f3578c-0dac-4edd-ad36-85a7b7930c01","Type":"ContainerDied","Data":"9d952b59416d63969161c9759288e2d1dfca24367e4babdd3baf17ffdb90e430"} Feb 01 14:55:17 crc kubenswrapper[4820]: I0201 14:55:17.565780 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv" Feb 01 14:55:17 crc kubenswrapper[4820]: I0201 14:55:17.654405 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72f3578c-0dac-4edd-ad36-85a7b7930c01-inventory\") pod \"72f3578c-0dac-4edd-ad36-85a7b7930c01\" (UID: \"72f3578c-0dac-4edd-ad36-85a7b7930c01\") " Feb 01 14:55:17 crc kubenswrapper[4820]: I0201 14:55:17.654489 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwktf\" (UniqueName: \"kubernetes.io/projected/72f3578c-0dac-4edd-ad36-85a7b7930c01-kube-api-access-rwktf\") pod \"72f3578c-0dac-4edd-ad36-85a7b7930c01\" (UID: \"72f3578c-0dac-4edd-ad36-85a7b7930c01\") " Feb 01 14:55:17 crc kubenswrapper[4820]: I0201 14:55:17.654578 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72f3578c-0dac-4edd-ad36-85a7b7930c01-ssh-key-openstack-edpm-ipam\") pod \"72f3578c-0dac-4edd-ad36-85a7b7930c01\" (UID: \"72f3578c-0dac-4edd-ad36-85a7b7930c01\") " Feb 01 14:55:17 crc kubenswrapper[4820]: I0201 14:55:17.654738 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/72f3578c-0dac-4edd-ad36-85a7b7930c01-ceph\") pod \"72f3578c-0dac-4edd-ad36-85a7b7930c01\" (UID: \"72f3578c-0dac-4edd-ad36-85a7b7930c01\") " Feb 01 14:55:17 crc kubenswrapper[4820]: I0201 14:55:17.660112 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72f3578c-0dac-4edd-ad36-85a7b7930c01-kube-api-access-rwktf" (OuterVolumeSpecName: "kube-api-access-rwktf") pod "72f3578c-0dac-4edd-ad36-85a7b7930c01" (UID: "72f3578c-0dac-4edd-ad36-85a7b7930c01"). InnerVolumeSpecName "kube-api-access-rwktf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:55:17 crc kubenswrapper[4820]: I0201 14:55:17.661071 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72f3578c-0dac-4edd-ad36-85a7b7930c01-ceph" (OuterVolumeSpecName: "ceph") pod "72f3578c-0dac-4edd-ad36-85a7b7930c01" (UID: "72f3578c-0dac-4edd-ad36-85a7b7930c01"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:55:17 crc kubenswrapper[4820]: I0201 14:55:17.679582 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72f3578c-0dac-4edd-ad36-85a7b7930c01-inventory" (OuterVolumeSpecName: "inventory") pod "72f3578c-0dac-4edd-ad36-85a7b7930c01" (UID: "72f3578c-0dac-4edd-ad36-85a7b7930c01"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:55:17 crc kubenswrapper[4820]: I0201 14:55:17.683006 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72f3578c-0dac-4edd-ad36-85a7b7930c01-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "72f3578c-0dac-4edd-ad36-85a7b7930c01" (UID: "72f3578c-0dac-4edd-ad36-85a7b7930c01"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:55:17 crc kubenswrapper[4820]: I0201 14:55:17.757703 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72f3578c-0dac-4edd-ad36-85a7b7930c01-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 14:55:17 crc kubenswrapper[4820]: I0201 14:55:17.757737 4820 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/72f3578c-0dac-4edd-ad36-85a7b7930c01-ceph\") on node \"crc\" DevicePath \"\"" Feb 01 14:55:17 crc kubenswrapper[4820]: I0201 14:55:17.757748 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72f3578c-0dac-4edd-ad36-85a7b7930c01-inventory\") on node \"crc\" DevicePath \"\"" Feb 01 14:55:17 crc kubenswrapper[4820]: I0201 14:55:17.757757 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwktf\" (UniqueName: \"kubernetes.io/projected/72f3578c-0dac-4edd-ad36-85a7b7930c01-kube-api-access-rwktf\") on node \"crc\" DevicePath \"\"" Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.194378 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv" event={"ID":"72f3578c-0dac-4edd-ad36-85a7b7930c01","Type":"ContainerDied","Data":"4e65c2bc32ef270ef653ee55202fa20a225a58beaab9d623c630ad564d9ff2b9"} Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.194689 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e65c2bc32ef270ef653ee55202fa20a225a58beaab9d623c630ad564d9ff2b9" Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.194459 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv" Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.268583 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-f8qrq"] Feb 01 14:55:18 crc kubenswrapper[4820]: E0201 14:55:18.269169 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72f3578c-0dac-4edd-ad36-85a7b7930c01" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.269237 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="72f3578c-0dac-4edd-ad36-85a7b7930c01" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.269456 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="72f3578c-0dac-4edd-ad36-85a7b7930c01" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.270156 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f8qrq" Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.284368 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-f8qrq"] Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.307752 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.308101 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.308199 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.308589 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw" Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.309064 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.366232 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49gjl\" (UniqueName: \"kubernetes.io/projected/b0b37762-2ced-48f3-b7f1-8e2b14cb2fee-kube-api-access-49gjl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f8qrq\" (UID: \"b0b37762-2ced-48f3-b7f1-8e2b14cb2fee\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f8qrq" Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.366500 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b0b37762-2ced-48f3-b7f1-8e2b14cb2fee-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f8qrq\" (UID: \"b0b37762-2ced-48f3-b7f1-8e2b14cb2fee\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f8qrq" Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.366865 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0b37762-2ced-48f3-b7f1-8e2b14cb2fee-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f8qrq\" (UID: \"b0b37762-2ced-48f3-b7f1-8e2b14cb2fee\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f8qrq" Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.367088 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b0b37762-2ced-48f3-b7f1-8e2b14cb2fee-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f8qrq\" (UID: \"b0b37762-2ced-48f3-b7f1-8e2b14cb2fee\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f8qrq" Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.468669 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b0b37762-2ced-48f3-b7f1-8e2b14cb2fee-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f8qrq\" (UID: \"b0b37762-2ced-48f3-b7f1-8e2b14cb2fee\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f8qrq" Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.468744 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0b37762-2ced-48f3-b7f1-8e2b14cb2fee-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f8qrq\" (UID: \"b0b37762-2ced-48f3-b7f1-8e2b14cb2fee\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f8qrq" Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.468785 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b0b37762-2ced-48f3-b7f1-8e2b14cb2fee-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f8qrq\" (UID: \"b0b37762-2ced-48f3-b7f1-8e2b14cb2fee\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f8qrq" Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.468884 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49gjl\" (UniqueName: \"kubernetes.io/projected/b0b37762-2ced-48f3-b7f1-8e2b14cb2fee-kube-api-access-49gjl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f8qrq\" (UID: \"b0b37762-2ced-48f3-b7f1-8e2b14cb2fee\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f8qrq" Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.473960 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b0b37762-2ced-48f3-b7f1-8e2b14cb2fee-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f8qrq\" (UID: \"b0b37762-2ced-48f3-b7f1-8e2b14cb2fee\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f8qrq" Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.474059 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0b37762-2ced-48f3-b7f1-8e2b14cb2fee-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f8qrq\" (UID: \"b0b37762-2ced-48f3-b7f1-8e2b14cb2fee\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f8qrq" Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.475584 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b0b37762-2ced-48f3-b7f1-8e2b14cb2fee-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f8qrq\" (UID: \"b0b37762-2ced-48f3-b7f1-8e2b14cb2fee\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f8qrq" Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.488635 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49gjl\" (UniqueName: \"kubernetes.io/projected/b0b37762-2ced-48f3-b7f1-8e2b14cb2fee-kube-api-access-49gjl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f8qrq\" (UID: \"b0b37762-2ced-48f3-b7f1-8e2b14cb2fee\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f8qrq" Feb 01 14:55:18 crc kubenswrapper[4820]: I0201 14:55:18.630928 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f8qrq" Feb 01 14:55:19 crc kubenswrapper[4820]: I0201 14:55:19.140370 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-f8qrq"] Feb 01 14:55:19 crc kubenswrapper[4820]: I0201 14:55:19.215118 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f8qrq" event={"ID":"b0b37762-2ced-48f3-b7f1-8e2b14cb2fee","Type":"ContainerStarted","Data":"201e65c9d164a62b40b7eb9b19b6420e801c6b5e63e26b3c59dc7f193ed57eeb"} Feb 01 14:55:19 crc kubenswrapper[4820]: I0201 14:55:19.242931 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:55:19 crc kubenswrapper[4820]: I0201 14:55:19.243030 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:55:19 crc kubenswrapper[4820]: I0201 14:55:19.243107 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 14:55:19 crc kubenswrapper[4820]: I0201 14:55:19.244181 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e1ac13f57d76a51287898e8d454a95536e8fab51db09445ace007faf6813a62c"} pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 14:55:19 crc kubenswrapper[4820]: I0201 14:55:19.244282 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" containerID="cri-o://e1ac13f57d76a51287898e8d454a95536e8fab51db09445ace007faf6813a62c" gracePeriod=600 Feb 01 14:55:20 crc kubenswrapper[4820]: I0201 14:55:20.214919 4820 generic.go:334] "Generic (PLEG): container finished" podID="060a9e0b-803f-4ccc-bed6-92614d449527" containerID="e1ac13f57d76a51287898e8d454a95536e8fab51db09445ace007faf6813a62c" exitCode=0 Feb 01 14:55:20 crc kubenswrapper[4820]: I0201 14:55:20.215098 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerDied","Data":"e1ac13f57d76a51287898e8d454a95536e8fab51db09445ace007faf6813a62c"} Feb 01 14:55:20 crc kubenswrapper[4820]: I0201 14:55:20.215512 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerStarted","Data":"6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23"} Feb 01 14:55:20 crc kubenswrapper[4820]: I0201 14:55:20.215534 4820 scope.go:117] "RemoveContainer" containerID="3aa1f8d7f7132f59d7bfea237501983b37cd4afbb975a841b8bdf260e2266f1b" Feb 01 14:55:20 crc kubenswrapper[4820]: 
I0201 14:55:20.219488 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f8qrq" event={"ID":"b0b37762-2ced-48f3-b7f1-8e2b14cb2fee","Type":"ContainerStarted","Data":"7d8fcaf85852b93a5223225f92c30f949f4df0a6a308c432bdf5225f898d504c"} Feb 01 14:55:20 crc kubenswrapper[4820]: I0201 14:55:20.261906 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f8qrq" podStartSLOduration=1.813605148 podStartE2EDuration="2.261885865s" podCreationTimestamp="2026-02-01 14:55:18 +0000 UTC" firstStartedPulling="2026-02-01 14:55:19.143505618 +0000 UTC m=+2060.663871902" lastFinishedPulling="2026-02-01 14:55:19.591786335 +0000 UTC m=+2061.112152619" observedRunningTime="2026-02-01 14:55:20.259611001 +0000 UTC m=+2061.779977285" watchObservedRunningTime="2026-02-01 14:55:20.261885865 +0000 UTC m=+2061.782252149" Feb 01 14:55:52 crc kubenswrapper[4820]: I0201 14:55:52.518269 4820 generic.go:334] "Generic (PLEG): container finished" podID="b0b37762-2ced-48f3-b7f1-8e2b14cb2fee" containerID="7d8fcaf85852b93a5223225f92c30f949f4df0a6a308c432bdf5225f898d504c" exitCode=0 Feb 01 14:55:52 crc kubenswrapper[4820]: I0201 14:55:52.518378 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f8qrq" event={"ID":"b0b37762-2ced-48f3-b7f1-8e2b14cb2fee","Type":"ContainerDied","Data":"7d8fcaf85852b93a5223225f92c30f949f4df0a6a308c432bdf5225f898d504c"} Feb 01 14:55:53 crc kubenswrapper[4820]: I0201 14:55:53.981235 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f8qrq" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.156679 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b0b37762-2ced-48f3-b7f1-8e2b14cb2fee-ceph\") pod \"b0b37762-2ced-48f3-b7f1-8e2b14cb2fee\" (UID: \"b0b37762-2ced-48f3-b7f1-8e2b14cb2fee\") " Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.156769 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b0b37762-2ced-48f3-b7f1-8e2b14cb2fee-ssh-key-openstack-edpm-ipam\") pod \"b0b37762-2ced-48f3-b7f1-8e2b14cb2fee\" (UID: \"b0b37762-2ced-48f3-b7f1-8e2b14cb2fee\") " Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.156969 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49gjl\" (UniqueName: \"kubernetes.io/projected/b0b37762-2ced-48f3-b7f1-8e2b14cb2fee-kube-api-access-49gjl\") pod \"b0b37762-2ced-48f3-b7f1-8e2b14cb2fee\" (UID: \"b0b37762-2ced-48f3-b7f1-8e2b14cb2fee\") " Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.157006 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0b37762-2ced-48f3-b7f1-8e2b14cb2fee-inventory\") pod \"b0b37762-2ced-48f3-b7f1-8e2b14cb2fee\" (UID: \"b0b37762-2ced-48f3-b7f1-8e2b14cb2fee\") " Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.166080 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0b37762-2ced-48f3-b7f1-8e2b14cb2fee-ceph" (OuterVolumeSpecName: "ceph") pod "b0b37762-2ced-48f3-b7f1-8e2b14cb2fee" (UID: "b0b37762-2ced-48f3-b7f1-8e2b14cb2fee"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.166137 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0b37762-2ced-48f3-b7f1-8e2b14cb2fee-kube-api-access-49gjl" (OuterVolumeSpecName: "kube-api-access-49gjl") pod "b0b37762-2ced-48f3-b7f1-8e2b14cb2fee" (UID: "b0b37762-2ced-48f3-b7f1-8e2b14cb2fee"). InnerVolumeSpecName "kube-api-access-49gjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.189791 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0b37762-2ced-48f3-b7f1-8e2b14cb2fee-inventory" (OuterVolumeSpecName: "inventory") pod "b0b37762-2ced-48f3-b7f1-8e2b14cb2fee" (UID: "b0b37762-2ced-48f3-b7f1-8e2b14cb2fee"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.198851 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0b37762-2ced-48f3-b7f1-8e2b14cb2fee-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b0b37762-2ced-48f3-b7f1-8e2b14cb2fee" (UID: "b0b37762-2ced-48f3-b7f1-8e2b14cb2fee"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.258585 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49gjl\" (UniqueName: \"kubernetes.io/projected/b0b37762-2ced-48f3-b7f1-8e2b14cb2fee-kube-api-access-49gjl\") on node \"crc\" DevicePath \"\"" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.258699 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0b37762-2ced-48f3-b7f1-8e2b14cb2fee-inventory\") on node \"crc\" DevicePath \"\"" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.258727 4820 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b0b37762-2ced-48f3-b7f1-8e2b14cb2fee-ceph\") on node \"crc\" DevicePath \"\"" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.258739 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b0b37762-2ced-48f3-b7f1-8e2b14cb2fee-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.537038 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f8qrq" event={"ID":"b0b37762-2ced-48f3-b7f1-8e2b14cb2fee","Type":"ContainerDied","Data":"201e65c9d164a62b40b7eb9b19b6420e801c6b5e63e26b3c59dc7f193ed57eeb"} Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.537088 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="201e65c9d164a62b40b7eb9b19b6420e801c6b5e63e26b3c59dc7f193ed57eeb" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.537109 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f8qrq" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.649716 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2"] Feb 01 14:55:54 crc kubenswrapper[4820]: E0201 14:55:54.650167 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0b37762-2ced-48f3-b7f1-8e2b14cb2fee" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.650190 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0b37762-2ced-48f3-b7f1-8e2b14cb2fee" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.650385 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0b37762-2ced-48f3-b7f1-8e2b14cb2fee" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.651130 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.653156 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.653256 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.653933 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.654396 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.658852 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.666678 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0b2fa04-9143-40a5-840e-72d5753f954b-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2\" (UID: \"a0b2fa04-9143-40a5-840e-72d5753f954b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.666759 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr8gl\" (UniqueName: \"kubernetes.io/projected/a0b2fa04-9143-40a5-840e-72d5753f954b-kube-api-access-pr8gl\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2\" (UID: \"a0b2fa04-9143-40a5-840e-72d5753f954b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.666785 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0b2fa04-9143-40a5-840e-72d5753f954b-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2\" (UID: \"a0b2fa04-9143-40a5-840e-72d5753f954b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.666942 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a0b2fa04-9143-40a5-840e-72d5753f954b-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2\" (UID: \"a0b2fa04-9143-40a5-840e-72d5753f954b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.676786 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2"] Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.769377 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr8gl\" (UniqueName: \"kubernetes.io/projected/a0b2fa04-9143-40a5-840e-72d5753f954b-kube-api-access-pr8gl\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2\" (UID: \"a0b2fa04-9143-40a5-840e-72d5753f954b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.769463 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0b2fa04-9143-40a5-840e-72d5753f954b-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2\" (UID: \"a0b2fa04-9143-40a5-840e-72d5753f954b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.769527 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a0b2fa04-9143-40a5-840e-72d5753f954b-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2\" (UID: \"a0b2fa04-9143-40a5-840e-72d5753f954b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.769619 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0b2fa04-9143-40a5-840e-72d5753f954b-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2\" (UID: \"a0b2fa04-9143-40a5-840e-72d5753f954b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.777983 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a0b2fa04-9143-40a5-840e-72d5753f954b-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2\" (UID: \"a0b2fa04-9143-40a5-840e-72d5753f954b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.794080 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0b2fa04-9143-40a5-840e-72d5753f954b-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2\" (UID: \"a0b2fa04-9143-40a5-840e-72d5753f954b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.797432 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0b2fa04-9143-40a5-840e-72d5753f954b-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2\" (UID: \"a0b2fa04-9143-40a5-840e-72d5753f954b\") " 
pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.802534 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr8gl\" (UniqueName: \"kubernetes.io/projected/a0b2fa04-9143-40a5-840e-72d5753f954b-kube-api-access-pr8gl\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2\" (UID: \"a0b2fa04-9143-40a5-840e-72d5753f954b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2" Feb 01 14:55:54 crc kubenswrapper[4820]: I0201 14:55:54.966370 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2" Feb 01 14:55:55 crc kubenswrapper[4820]: I0201 14:55:55.525385 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2"] Feb 01 14:55:55 crc kubenswrapper[4820]: I0201 14:55:55.550628 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2" event={"ID":"a0b2fa04-9143-40a5-840e-72d5753f954b","Type":"ContainerStarted","Data":"d0d241c0be6cbc21c16bd05771102b1c3672ed890aa36d55df9e1675146d31fd"} Feb 01 14:55:56 crc kubenswrapper[4820]: I0201 14:55:56.565292 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2" event={"ID":"a0b2fa04-9143-40a5-840e-72d5753f954b","Type":"ContainerStarted","Data":"c56da68087b6928813f8d6a824919047c054da77603b6aafa68d20fda2fa1d25"} Feb 01 14:55:56 crc kubenswrapper[4820]: I0201 14:55:56.587201 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2" podStartSLOduration=2.07394082 podStartE2EDuration="2.587184053s" podCreationTimestamp="2026-02-01 14:55:54 +0000 UTC" firstStartedPulling="2026-02-01 14:55:55.527990667 +0000 UTC m=+2097.048356951" lastFinishedPulling="2026-02-01 14:55:56.0412339 +0000 UTC m=+2097.561600184" observedRunningTime="2026-02-01 14:55:56.584794486 +0000 UTC m=+2098.105160790" watchObservedRunningTime="2026-02-01 14:55:56.587184053 +0000 UTC m=+2098.107550357" Feb 01 14:56:00 crc kubenswrapper[4820]: I0201 14:56:00.597921 4820 generic.go:334] "Generic (PLEG): container finished" podID="a0b2fa04-9143-40a5-840e-72d5753f954b" containerID="c56da68087b6928813f8d6a824919047c054da77603b6aafa68d20fda2fa1d25" exitCode=0 Feb 01 14:56:00 crc kubenswrapper[4820]: I0201 14:56:00.598035 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2" event={"ID":"a0b2fa04-9143-40a5-840e-72d5753f954b","Type":"ContainerDied","Data":"c56da68087b6928813f8d6a824919047c054da77603b6aafa68d20fda2fa1d25"} Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.058330 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.237474 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr8gl\" (UniqueName: \"kubernetes.io/projected/a0b2fa04-9143-40a5-840e-72d5753f954b-kube-api-access-pr8gl\") pod \"a0b2fa04-9143-40a5-840e-72d5753f954b\" (UID: \"a0b2fa04-9143-40a5-840e-72d5753f954b\") " Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.237539 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a0b2fa04-9143-40a5-840e-72d5753f954b-ceph\") pod \"a0b2fa04-9143-40a5-840e-72d5753f954b\" (UID: \"a0b2fa04-9143-40a5-840e-72d5753f954b\") " Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.237557 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0b2fa04-9143-40a5-840e-72d5753f954b-ssh-key-openstack-edpm-ipam\") pod \"a0b2fa04-9143-40a5-840e-72d5753f954b\" (UID: \"a0b2fa04-9143-40a5-840e-72d5753f954b\") " Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.237610 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0b2fa04-9143-40a5-840e-72d5753f954b-inventory\") pod \"a0b2fa04-9143-40a5-840e-72d5753f954b\" (UID: \"a0b2fa04-9143-40a5-840e-72d5753f954b\") " Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.245394 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0b2fa04-9143-40a5-840e-72d5753f954b-kube-api-access-pr8gl" (OuterVolumeSpecName: "kube-api-access-pr8gl") pod "a0b2fa04-9143-40a5-840e-72d5753f954b" (UID: "a0b2fa04-9143-40a5-840e-72d5753f954b"). InnerVolumeSpecName "kube-api-access-pr8gl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.247102 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0b2fa04-9143-40a5-840e-72d5753f954b-ceph" (OuterVolumeSpecName: "ceph") pod "a0b2fa04-9143-40a5-840e-72d5753f954b" (UID: "a0b2fa04-9143-40a5-840e-72d5753f954b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.268106 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0b2fa04-9143-40a5-840e-72d5753f954b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a0b2fa04-9143-40a5-840e-72d5753f954b" (UID: "a0b2fa04-9143-40a5-840e-72d5753f954b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.278191 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0b2fa04-9143-40a5-840e-72d5753f954b-inventory" (OuterVolumeSpecName: "inventory") pod "a0b2fa04-9143-40a5-840e-72d5753f954b" (UID: "a0b2fa04-9143-40a5-840e-72d5753f954b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.339735 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0b2fa04-9143-40a5-840e-72d5753f954b-inventory\") on node \"crc\" DevicePath \"\"" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.339774 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr8gl\" (UniqueName: \"kubernetes.io/projected/a0b2fa04-9143-40a5-840e-72d5753f954b-kube-api-access-pr8gl\") on node \"crc\" DevicePath \"\"" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.339789 4820 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a0b2fa04-9143-40a5-840e-72d5753f954b-ceph\") on node \"crc\" DevicePath \"\"" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.339802 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0b2fa04-9143-40a5-840e-72d5753f954b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.615253 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2" event={"ID":"a0b2fa04-9143-40a5-840e-72d5753f954b","Type":"ContainerDied","Data":"d0d241c0be6cbc21c16bd05771102b1c3672ed890aa36d55df9e1675146d31fd"} Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.615641 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0d241c0be6cbc21c16bd05771102b1c3672ed890aa36d55df9e1675146d31fd" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.615283 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.682523 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf"] Feb 01 14:56:02 crc kubenswrapper[4820]: E0201 14:56:02.682971 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0b2fa04-9143-40a5-840e-72d5753f954b" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.682992 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0b2fa04-9143-40a5-840e-72d5753f954b" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.683268 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0b2fa04-9143-40a5-840e-72d5753f954b" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.684041 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.687530 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.687723 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.687725 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.687893 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.687996 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.690996 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf"] Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.849790 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf\" (UID: \"f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.849855 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf\" (UID: \"f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.850045 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lc4j\" (UniqueName: \"kubernetes.io/projected/f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a-kube-api-access-9lc4j\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf\" (UID: \"f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.850093 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf\" (UID: \"f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.952117 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lc4j\" (UniqueName: \"kubernetes.io/projected/f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a-kube-api-access-9lc4j\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf\" (UID: \"f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.952229 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf\" (UID: \"f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.952389 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf\" (UID: \"f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.952435 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf\" (UID: \"f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.957633 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf\" (UID: \"f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.961078 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf\" (UID: \"f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.963765 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf\" (UID: \"f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf" Feb 01 14:56:02 crc kubenswrapper[4820]: I0201 14:56:02.975499 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lc4j\" (UniqueName: \"kubernetes.io/projected/f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a-kube-api-access-9lc4j\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf\" (UID: \"f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf" Feb 01 14:56:03 crc kubenswrapper[4820]: I0201 14:56:03.039083 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf" Feb 01 14:56:03 crc kubenswrapper[4820]: I0201 14:56:03.602556 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf"] Feb 01 14:56:03 crc kubenswrapper[4820]: W0201 14:56:03.613497 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1d46ee7_c695_461e_8f2b_8cccfcf2ca1a.slice/crio-6ef3cbd6d43f58c000a0bd26450a6259df6519466a4bc72aef4cd7166efef83a WatchSource:0}: Error finding container 6ef3cbd6d43f58c000a0bd26450a6259df6519466a4bc72aef4cd7166efef83a: Status 404 returned error can't find the container with id 6ef3cbd6d43f58c000a0bd26450a6259df6519466a4bc72aef4cd7166efef83a Feb 01 14:56:03 crc kubenswrapper[4820]: I0201 14:56:03.628175 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf" event={"ID":"f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a","Type":"ContainerStarted","Data":"6ef3cbd6d43f58c000a0bd26450a6259df6519466a4bc72aef4cd7166efef83a"} Feb 01 14:56:04 crc kubenswrapper[4820]: I0201 14:56:04.636662 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf" event={"ID":"f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a","Type":"ContainerStarted","Data":"dccf5036c9ca516519c95d03cad7fe90251dbad3738deeeca67772fbe9216a30"} Feb 01 14:56:04 crc kubenswrapper[4820]: I0201 14:56:04.658774 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf" podStartSLOduration=2.239399207 podStartE2EDuration="2.658757746s" podCreationTimestamp="2026-02-01 14:56:02 +0000 UTC" firstStartedPulling="2026-02-01 14:56:03.616427 +0000 UTC m=+2105.136793314" lastFinishedPulling="2026-02-01 14:56:04.035785569 +0000 UTC m=+2105.556151853" observedRunningTime="2026-02-01 14:56:04.651522594 +0000 UTC m=+2106.171888878" watchObservedRunningTime="2026-02-01 14:56:04.658757746 +0000 UTC m=+2106.179124030" Feb 01 14:56:37 crc kubenswrapper[4820]: I0201 14:56:37.835628 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5qzdp"] Feb 01 14:56:37 crc kubenswrapper[4820]: I0201 14:56:37.839099 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5qzdp" Feb 01 14:56:37 crc kubenswrapper[4820]: I0201 14:56:37.869340 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5qzdp"] Feb 01 14:56:37 crc kubenswrapper[4820]: I0201 14:56:37.975570 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkhdg\" (UniqueName: \"kubernetes.io/projected/700c0a35-18f8-4671-ab20-d83888994997-kube-api-access-kkhdg\") pod \"redhat-operators-5qzdp\" (UID: \"700c0a35-18f8-4671-ab20-d83888994997\") " pod="openshift-marketplace/redhat-operators-5qzdp" Feb 01 14:56:37 crc kubenswrapper[4820]: I0201 14:56:37.975632 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/700c0a35-18f8-4671-ab20-d83888994997-catalog-content\") pod \"redhat-operators-5qzdp\" (UID: \"700c0a35-18f8-4671-ab20-d83888994997\") " pod="openshift-marketplace/redhat-operators-5qzdp" Feb 01 14:56:37 crc kubenswrapper[4820]: I0201 14:56:37.975666 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/700c0a35-18f8-4671-ab20-d83888994997-utilities\") pod \"redhat-operators-5qzdp\" (UID: \"700c0a35-18f8-4671-ab20-d83888994997\") " pod="openshift-marketplace/redhat-operators-5qzdp" Feb 01 14:56:38 crc kubenswrapper[4820]: I0201 14:56:38.077551 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/700c0a35-18f8-4671-ab20-d83888994997-catalog-content\") pod \"redhat-operators-5qzdp\" (UID: \"700c0a35-18f8-4671-ab20-d83888994997\") " pod="openshift-marketplace/redhat-operators-5qzdp" Feb 01 14:56:38 crc kubenswrapper[4820]: I0201 14:56:38.077644 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/700c0a35-18f8-4671-ab20-d83888994997-utilities\") pod \"redhat-operators-5qzdp\" (UID: \"700c0a35-18f8-4671-ab20-d83888994997\") " pod="openshift-marketplace/redhat-operators-5qzdp" Feb 01 14:56:38 crc kubenswrapper[4820]: I0201 14:56:38.077817 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkhdg\" (UniqueName: \"kubernetes.io/projected/700c0a35-18f8-4671-ab20-d83888994997-kube-api-access-kkhdg\") pod \"redhat-operators-5qzdp\" (UID: \"700c0a35-18f8-4671-ab20-d83888994997\") " pod="openshift-marketplace/redhat-operators-5qzdp" Feb 01 14:56:38 crc kubenswrapper[4820]: I0201 14:56:38.078103 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/700c0a35-18f8-4671-ab20-d83888994997-catalog-content\") pod \"redhat-operators-5qzdp\" (UID: \"700c0a35-18f8-4671-ab20-d83888994997\") " pod="openshift-marketplace/redhat-operators-5qzdp" Feb 01 14:56:38 crc kubenswrapper[4820]: I0201 14:56:38.078424 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/700c0a35-18f8-4671-ab20-d83888994997-utilities\") pod \"redhat-operators-5qzdp\" (UID: \"700c0a35-18f8-4671-ab20-d83888994997\") " pod="openshift-marketplace/redhat-operators-5qzdp" Feb 01 14:56:38 crc kubenswrapper[4820]: I0201 14:56:38.097061 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kkhdg\" (UniqueName: \"kubernetes.io/projected/700c0a35-18f8-4671-ab20-d83888994997-kube-api-access-kkhdg\") pod \"redhat-operators-5qzdp\" (UID: \"700c0a35-18f8-4671-ab20-d83888994997\") " pod="openshift-marketplace/redhat-operators-5qzdp" Feb 01 14:56:38 crc kubenswrapper[4820]: I0201 14:56:38.165314 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5qzdp" Feb 01 14:56:38 crc kubenswrapper[4820]: I0201 14:56:38.621218 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5qzdp"] Feb 01 14:56:38 crc kubenswrapper[4820]: I0201 14:56:38.899481 4820 generic.go:334] "Generic (PLEG): container finished" podID="700c0a35-18f8-4671-ab20-d83888994997" containerID="cbbfd9adf7efbc9a679a0f0210dfecaf6cc1f5f38555bfc40d1896c207e7b69d" exitCode=0 Feb 01 14:56:38 crc kubenswrapper[4820]: I0201 14:56:38.899692 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qzdp" event={"ID":"700c0a35-18f8-4671-ab20-d83888994997","Type":"ContainerDied","Data":"cbbfd9adf7efbc9a679a0f0210dfecaf6cc1f5f38555bfc40d1896c207e7b69d"} Feb 01 14:56:38 crc kubenswrapper[4820]: I0201 14:56:38.899797 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qzdp" event={"ID":"700c0a35-18f8-4671-ab20-d83888994997","Type":"ContainerStarted","Data":"f8a03cbfb55857949c7ba5f8ec6b8e27845b7cbe281482ce1b896e4782444a5d"} Feb 01 14:56:39 crc kubenswrapper[4820]: I0201 14:56:39.911128 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qzdp" event={"ID":"700c0a35-18f8-4671-ab20-d83888994997","Type":"ContainerStarted","Data":"ed186e2796da77100a3a194f65b6340bd3d2d41a47b581b19ed7f1669c7bdea8"} Feb 01 14:56:40 crc kubenswrapper[4820]: I0201 14:56:40.923678 4820 generic.go:334] "Generic (PLEG): container finished" podID="700c0a35-18f8-4671-ab20-d83888994997" containerID="ed186e2796da77100a3a194f65b6340bd3d2d41a47b581b19ed7f1669c7bdea8" exitCode=0 Feb 01 14:56:40 crc kubenswrapper[4820]: I0201 14:56:40.923757 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qzdp" event={"ID":"700c0a35-18f8-4671-ab20-d83888994997","Type":"ContainerDied","Data":"ed186e2796da77100a3a194f65b6340bd3d2d41a47b581b19ed7f1669c7bdea8"} Feb 01 14:56:41 crc kubenswrapper[4820]: I0201 14:56:41.932763 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qzdp" event={"ID":"700c0a35-18f8-4671-ab20-d83888994997","Type":"ContainerStarted","Data":"f26ed936c929c21a85fd4a9a50bf59b83179a541b3e1fd48d97509b8f4c08e11"} Feb 01 14:56:41 crc kubenswrapper[4820]: I0201 14:56:41.958895 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5qzdp" podStartSLOduration=2.5571760169999997 podStartE2EDuration="4.958879833s" podCreationTimestamp="2026-02-01 14:56:37 +0000 UTC" firstStartedPulling="2026-02-01 14:56:38.902432785 +0000 UTC m=+2140.422799069" lastFinishedPulling="2026-02-01 14:56:41.304136601 +0000 UTC m=+2142.824502885" observedRunningTime="2026-02-01 14:56:41.952961993 +0000 UTC m=+2143.473328277" watchObservedRunningTime="2026-02-01 14:56:41.958879833 +0000 UTC m=+2143.479246117" Feb 01 14:56:44 crc kubenswrapper[4820]: I0201 14:56:44.956462 4820 generic.go:334] "Generic (PLEG): container finished" podID="f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a" 
containerID="dccf5036c9ca516519c95d03cad7fe90251dbad3738deeeca67772fbe9216a30" exitCode=0 Feb 01 14:56:44 crc kubenswrapper[4820]: I0201 14:56:44.956552 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf" event={"ID":"f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a","Type":"ContainerDied","Data":"dccf5036c9ca516519c95d03cad7fe90251dbad3738deeeca67772fbe9216a30"} Feb 01 14:56:46 crc kubenswrapper[4820]: I0201 14:56:46.465789 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf" Feb 01 14:56:46 crc kubenswrapper[4820]: I0201 14:56:46.573189 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a-ceph\") pod \"f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a\" (UID: \"f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a\") " Feb 01 14:56:46 crc kubenswrapper[4820]: I0201 14:56:46.573330 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lc4j\" (UniqueName: \"kubernetes.io/projected/f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a-kube-api-access-9lc4j\") pod \"f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a\" (UID: \"f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a\") " Feb 01 14:56:46 crc kubenswrapper[4820]: I0201 14:56:46.573469 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a-ssh-key-openstack-edpm-ipam\") pod \"f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a\" (UID: \"f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a\") " Feb 01 14:56:46 crc kubenswrapper[4820]: I0201 14:56:46.573687 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a-inventory\") pod \"f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a\" (UID: \"f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a\") " Feb 01 14:56:46 crc kubenswrapper[4820]: I0201 14:56:46.579206 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a-kube-api-access-9lc4j" (OuterVolumeSpecName: "kube-api-access-9lc4j") pod "f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a" (UID: "f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a"). InnerVolumeSpecName "kube-api-access-9lc4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:56:46 crc kubenswrapper[4820]: I0201 14:56:46.579999 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a-ceph" (OuterVolumeSpecName: "ceph") pod "f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a" (UID: "f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:56:46 crc kubenswrapper[4820]: I0201 14:56:46.597546 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a" (UID: "f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:56:46 crc kubenswrapper[4820]: I0201 14:56:46.599859 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a-inventory" (OuterVolumeSpecName: "inventory") pod "f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a" (UID: "f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:56:46 crc kubenswrapper[4820]: I0201 14:56:46.677194 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lc4j\" (UniqueName: \"kubernetes.io/projected/f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a-kube-api-access-9lc4j\") on node \"crc\" DevicePath \"\"" Feb 01 14:56:46 crc kubenswrapper[4820]: I0201 14:56:46.677233 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 14:56:46 crc kubenswrapper[4820]: I0201 14:56:46.677244 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a-inventory\") on node \"crc\" DevicePath \"\"" Feb 01 14:56:46 crc kubenswrapper[4820]: I0201 14:56:46.677254 4820 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a-ceph\") on node \"crc\" DevicePath \"\"" Feb 01 14:56:46 crc kubenswrapper[4820]: I0201 14:56:46.975739 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf" event={"ID":"f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a","Type":"ContainerDied","Data":"6ef3cbd6d43f58c000a0bd26450a6259df6519466a4bc72aef4cd7166efef83a"} Feb 01 14:56:46 crc kubenswrapper[4820]: I0201 14:56:46.975799 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ef3cbd6d43f58c000a0bd26450a6259df6519466a4bc72aef4cd7166efef83a" Feb 01 14:56:46 crc kubenswrapper[4820]: I0201 14:56:46.975911 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf" Feb 01 14:56:47 crc kubenswrapper[4820]: I0201 14:56:47.085238 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jzj27"] Feb 01 14:56:47 crc kubenswrapper[4820]: E0201 14:56:47.085643 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 01 14:56:47 crc kubenswrapper[4820]: I0201 14:56:47.085662 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 01 14:56:47 crc kubenswrapper[4820]: I0201 14:56:47.085855 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 01 14:56:47 crc kubenswrapper[4820]: I0201 14:56:47.086502 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jzj27" Feb 01 14:56:47 crc kubenswrapper[4820]: I0201 14:56:47.089108 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 01 14:56:47 crc kubenswrapper[4820]: I0201 14:56:47.089365 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw" Feb 01 14:56:47 crc kubenswrapper[4820]: I0201 14:56:47.089499 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 01 14:56:47 crc kubenswrapper[4820]: I0201 14:56:47.092322 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 01 14:56:47 crc kubenswrapper[4820]: I0201 14:56:47.092352 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 01 14:56:47 crc kubenswrapper[4820]: I0201 14:56:47.092583 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jzj27"] Feb 01 14:56:47 crc kubenswrapper[4820]: I0201 14:56:47.186802 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9d8f9e59-fa16-4f98-84f2-0b66514e6a0f-ceph\") pod \"ssh-known-hosts-edpm-deployment-jzj27\" (UID: \"9d8f9e59-fa16-4f98-84f2-0b66514e6a0f\") " pod="openstack/ssh-known-hosts-edpm-deployment-jzj27" Feb 01 14:56:47 crc kubenswrapper[4820]: I0201 14:56:47.186857 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9d8f9e59-fa16-4f98-84f2-0b66514e6a0f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jzj27\" (UID: \"9d8f9e59-fa16-4f98-84f2-0b66514e6a0f\") " pod="openstack/ssh-known-hosts-edpm-deployment-jzj27" Feb 01 14:56:47 crc kubenswrapper[4820]: I0201 14:56:47.186935 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d8f9e59-fa16-4f98-84f2-0b66514e6a0f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jzj27\" (UID: \"9d8f9e59-fa16-4f98-84f2-0b66514e6a0f\") " pod="openstack/ssh-known-hosts-edpm-deployment-jzj27" Feb 01 14:56:47 crc kubenswrapper[4820]: I0201 14:56:47.186955 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m698\" (UniqueName: \"kubernetes.io/projected/9d8f9e59-fa16-4f98-84f2-0b66514e6a0f-kube-api-access-9m698\") pod \"ssh-known-hosts-edpm-deployment-jzj27\" (UID: \"9d8f9e59-fa16-4f98-84f2-0b66514e6a0f\") " pod="openstack/ssh-known-hosts-edpm-deployment-jzj27" Feb 01 14:56:47 crc kubenswrapper[4820]: I0201 14:56:47.288537 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9d8f9e59-fa16-4f98-84f2-0b66514e6a0f-ceph\") pod \"ssh-known-hosts-edpm-deployment-jzj27\" (UID: \"9d8f9e59-fa16-4f98-84f2-0b66514e6a0f\") " pod="openstack/ssh-known-hosts-edpm-deployment-jzj27" Feb 01 14:56:47 crc kubenswrapper[4820]: I0201 14:56:47.288609 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9d8f9e59-fa16-4f98-84f2-0b66514e6a0f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jzj27\" (UID: 
\"9d8f9e59-fa16-4f98-84f2-0b66514e6a0f\") " pod="openstack/ssh-known-hosts-edpm-deployment-jzj27" Feb 01 14:56:47 crc kubenswrapper[4820]: I0201 14:56:47.288692 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d8f9e59-fa16-4f98-84f2-0b66514e6a0f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jzj27\" (UID: \"9d8f9e59-fa16-4f98-84f2-0b66514e6a0f\") " pod="openstack/ssh-known-hosts-edpm-deployment-jzj27" Feb 01 14:56:47 crc kubenswrapper[4820]: I0201 14:56:47.288715 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m698\" (UniqueName: \"kubernetes.io/projected/9d8f9e59-fa16-4f98-84f2-0b66514e6a0f-kube-api-access-9m698\") pod \"ssh-known-hosts-edpm-deployment-jzj27\" (UID: \"9d8f9e59-fa16-4f98-84f2-0b66514e6a0f\") " pod="openstack/ssh-known-hosts-edpm-deployment-jzj27" Feb 01 14:56:47 crc kubenswrapper[4820]: I0201 14:56:47.292124 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d8f9e59-fa16-4f98-84f2-0b66514e6a0f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jzj27\" (UID: \"9d8f9e59-fa16-4f98-84f2-0b66514e6a0f\") " pod="openstack/ssh-known-hosts-edpm-deployment-jzj27" Feb 01 14:56:47 crc kubenswrapper[4820]: I0201 14:56:47.292399 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9d8f9e59-fa16-4f98-84f2-0b66514e6a0f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jzj27\" (UID: \"9d8f9e59-fa16-4f98-84f2-0b66514e6a0f\") " pod="openstack/ssh-known-hosts-edpm-deployment-jzj27" Feb 01 14:56:47 crc kubenswrapper[4820]: I0201 14:56:47.297773 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9d8f9e59-fa16-4f98-84f2-0b66514e6a0f-ceph\") pod \"ssh-known-hosts-edpm-deployment-jzj27\" (UID: \"9d8f9e59-fa16-4f98-84f2-0b66514e6a0f\") " pod="openstack/ssh-known-hosts-edpm-deployment-jzj27" Feb 01 14:56:47 crc kubenswrapper[4820]: I0201 14:56:47.311912 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m698\" (UniqueName: \"kubernetes.io/projected/9d8f9e59-fa16-4f98-84f2-0b66514e6a0f-kube-api-access-9m698\") pod \"ssh-known-hosts-edpm-deployment-jzj27\" (UID: \"9d8f9e59-fa16-4f98-84f2-0b66514e6a0f\") " pod="openstack/ssh-known-hosts-edpm-deployment-jzj27" Feb 01 14:56:47 crc kubenswrapper[4820]: I0201 14:56:47.417683 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jzj27" Feb 01 14:56:47 crc kubenswrapper[4820]: I0201 14:56:47.951580 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jzj27"] Feb 01 14:56:47 crc kubenswrapper[4820]: I0201 14:56:47.988139 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jzj27" event={"ID":"9d8f9e59-fa16-4f98-84f2-0b66514e6a0f","Type":"ContainerStarted","Data":"c4da80b72ee6d54a7e316538c232c4434168b0448bcec530157732a07dcd9df2"} Feb 01 14:56:48 crc kubenswrapper[4820]: I0201 14:56:48.166250 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5qzdp" Feb 01 14:56:48 crc kubenswrapper[4820]: I0201 14:56:48.166589 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5qzdp" Feb 01 14:56:48 crc kubenswrapper[4820]: I0201 14:56:48.215351 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5qzdp" Feb 01 14:56:49 crc kubenswrapper[4820]: I0201 14:56:49.002637 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jzj27" event={"ID":"9d8f9e59-fa16-4f98-84f2-0b66514e6a0f","Type":"ContainerStarted","Data":"8cc71a63c5e624c9137d67728b9f14f2f9c2d04bd635b1c0eb70a701f8841dc5"} Feb 01 14:56:49 crc kubenswrapper[4820]: I0201 14:56:49.027532 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-jzj27" podStartSLOduration=1.585082174 podStartE2EDuration="2.02751775s" podCreationTimestamp="2026-02-01 14:56:47 +0000 UTC" firstStartedPulling="2026-02-01 14:56:47.964168504 +0000 UTC m=+2149.484534788" lastFinishedPulling="2026-02-01 14:56:48.40660408 +0000 UTC m=+2149.926970364" observedRunningTime="2026-02-01 14:56:49.023710199 +0000 UTC m=+2150.544076503" watchObservedRunningTime="2026-02-01 14:56:49.02751775 +0000 UTC m=+2150.547884034" Feb 01 14:56:49 crc kubenswrapper[4820]: I0201 14:56:49.060558 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5qzdp" Feb 01 14:56:49 crc kubenswrapper[4820]: I0201 14:56:49.134345 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5qzdp"] Feb 01 14:56:51 crc kubenswrapper[4820]: I0201 14:56:51.040270 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5qzdp" podUID="700c0a35-18f8-4671-ab20-d83888994997" containerName="registry-server" containerID="cri-o://f26ed936c929c21a85fd4a9a50bf59b83179a541b3e1fd48d97509b8f4c08e11" gracePeriod=2 Feb 01 14:56:51 crc kubenswrapper[4820]: I0201 14:56:51.447392 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5qzdp" Feb 01 14:56:51 crc kubenswrapper[4820]: I0201 14:56:51.571052 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/700c0a35-18f8-4671-ab20-d83888994997-catalog-content\") pod \"700c0a35-18f8-4671-ab20-d83888994997\" (UID: \"700c0a35-18f8-4671-ab20-d83888994997\") " Feb 01 14:56:51 crc kubenswrapper[4820]: I0201 14:56:51.571176 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkhdg\" (UniqueName: \"kubernetes.io/projected/700c0a35-18f8-4671-ab20-d83888994997-kube-api-access-kkhdg\") pod \"700c0a35-18f8-4671-ab20-d83888994997\" (UID: \"700c0a35-18f8-4671-ab20-d83888994997\") " Feb 01 14:56:51 crc kubenswrapper[4820]: I0201 14:56:51.571208 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/700c0a35-18f8-4671-ab20-d83888994997-utilities\") pod \"700c0a35-18f8-4671-ab20-d83888994997\" (UID: \"700c0a35-18f8-4671-ab20-d83888994997\") " Feb 01 14:56:51 crc kubenswrapper[4820]: I0201 14:56:51.572512 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/700c0a35-18f8-4671-ab20-d83888994997-utilities" (OuterVolumeSpecName: "utilities") pod "700c0a35-18f8-4671-ab20-d83888994997" (UID: "700c0a35-18f8-4671-ab20-d83888994997"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:56:51 crc kubenswrapper[4820]: I0201 14:56:51.572891 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/700c0a35-18f8-4671-ab20-d83888994997-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 14:56:51 crc kubenswrapper[4820]: I0201 14:56:51.576845 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/700c0a35-18f8-4671-ab20-d83888994997-kube-api-access-kkhdg" (OuterVolumeSpecName: "kube-api-access-kkhdg") pod "700c0a35-18f8-4671-ab20-d83888994997" (UID: "700c0a35-18f8-4671-ab20-d83888994997"). InnerVolumeSpecName "kube-api-access-kkhdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:56:51 crc kubenswrapper[4820]: I0201 14:56:51.673904 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkhdg\" (UniqueName: \"kubernetes.io/projected/700c0a35-18f8-4671-ab20-d83888994997-kube-api-access-kkhdg\") on node \"crc\" DevicePath \"\"" Feb 01 14:56:51 crc kubenswrapper[4820]: I0201 14:56:51.707415 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/700c0a35-18f8-4671-ab20-d83888994997-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "700c0a35-18f8-4671-ab20-d83888994997" (UID: "700c0a35-18f8-4671-ab20-d83888994997"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:56:51 crc kubenswrapper[4820]: I0201 14:56:51.775345 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/700c0a35-18f8-4671-ab20-d83888994997-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 14:56:52 crc kubenswrapper[4820]: I0201 14:56:52.053452 4820 generic.go:334] "Generic (PLEG): container finished" podID="700c0a35-18f8-4671-ab20-d83888994997" containerID="f26ed936c929c21a85fd4a9a50bf59b83179a541b3e1fd48d97509b8f4c08e11" exitCode=0 Feb 01 14:56:52 crc kubenswrapper[4820]: I0201 14:56:52.053524 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qzdp" event={"ID":"700c0a35-18f8-4671-ab20-d83888994997","Type":"ContainerDied","Data":"f26ed936c929c21a85fd4a9a50bf59b83179a541b3e1fd48d97509b8f4c08e11"} Feb 01 14:56:52 crc kubenswrapper[4820]: I0201 14:56:52.053559 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5qzdp" Feb 01 14:56:52 crc kubenswrapper[4820]: I0201 14:56:52.053594 4820 scope.go:117] "RemoveContainer" containerID="f26ed936c929c21a85fd4a9a50bf59b83179a541b3e1fd48d97509b8f4c08e11" Feb 01 14:56:52 crc kubenswrapper[4820]: I0201 14:56:52.053576 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qzdp" event={"ID":"700c0a35-18f8-4671-ab20-d83888994997","Type":"ContainerDied","Data":"f8a03cbfb55857949c7ba5f8ec6b8e27845b7cbe281482ce1b896e4782444a5d"} Feb 01 14:56:52 crc kubenswrapper[4820]: I0201 14:56:52.086517 4820 scope.go:117] "RemoveContainer" containerID="ed186e2796da77100a3a194f65b6340bd3d2d41a47b581b19ed7f1669c7bdea8" Feb 01 14:56:52 crc kubenswrapper[4820]: I0201 14:56:52.111057 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5qzdp"] Feb 01 14:56:52 crc kubenswrapper[4820]: I0201 14:56:52.130182 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5qzdp"] Feb 01 14:56:52 crc kubenswrapper[4820]: I0201 14:56:52.139295 4820 scope.go:117] "RemoveContainer" containerID="cbbfd9adf7efbc9a679a0f0210dfecaf6cc1f5f38555bfc40d1896c207e7b69d" Feb 01 14:56:52 crc kubenswrapper[4820]: I0201 14:56:52.183775 4820 scope.go:117] "RemoveContainer" containerID="f26ed936c929c21a85fd4a9a50bf59b83179a541b3e1fd48d97509b8f4c08e11" Feb 01 14:56:52 crc kubenswrapper[4820]: E0201 14:56:52.184651 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f26ed936c929c21a85fd4a9a50bf59b83179a541b3e1fd48d97509b8f4c08e11\": container with ID starting with f26ed936c929c21a85fd4a9a50bf59b83179a541b3e1fd48d97509b8f4c08e11 not found: ID does not exist" containerID="f26ed936c929c21a85fd4a9a50bf59b83179a541b3e1fd48d97509b8f4c08e11" Feb 01 14:56:52 crc kubenswrapper[4820]: I0201 14:56:52.184692 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f26ed936c929c21a85fd4a9a50bf59b83179a541b3e1fd48d97509b8f4c08e11"} err="failed to get container status \"f26ed936c929c21a85fd4a9a50bf59b83179a541b3e1fd48d97509b8f4c08e11\": rpc error: code = NotFound desc = could not find container \"f26ed936c929c21a85fd4a9a50bf59b83179a541b3e1fd48d97509b8f4c08e11\": container with ID starting with f26ed936c929c21a85fd4a9a50bf59b83179a541b3e1fd48d97509b8f4c08e11 not found: ID does not exist" Feb 01 14:56:52 crc 
kubenswrapper[4820]: I0201 14:56:52.184719 4820 scope.go:117] "RemoveContainer" containerID="ed186e2796da77100a3a194f65b6340bd3d2d41a47b581b19ed7f1669c7bdea8" Feb 01 14:56:52 crc kubenswrapper[4820]: E0201 14:56:52.185126 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed186e2796da77100a3a194f65b6340bd3d2d41a47b581b19ed7f1669c7bdea8\": container with ID starting with ed186e2796da77100a3a194f65b6340bd3d2d41a47b581b19ed7f1669c7bdea8 not found: ID does not exist" containerID="ed186e2796da77100a3a194f65b6340bd3d2d41a47b581b19ed7f1669c7bdea8" Feb 01 14:56:52 crc kubenswrapper[4820]: I0201 14:56:52.185164 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed186e2796da77100a3a194f65b6340bd3d2d41a47b581b19ed7f1669c7bdea8"} err="failed to get container status \"ed186e2796da77100a3a194f65b6340bd3d2d41a47b581b19ed7f1669c7bdea8\": rpc error: code = NotFound desc = could not find container \"ed186e2796da77100a3a194f65b6340bd3d2d41a47b581b19ed7f1669c7bdea8\": container with ID starting with ed186e2796da77100a3a194f65b6340bd3d2d41a47b581b19ed7f1669c7bdea8 not found: ID does not exist" Feb 01 14:56:52 crc kubenswrapper[4820]: I0201 14:56:52.185193 4820 scope.go:117] "RemoveContainer" containerID="cbbfd9adf7efbc9a679a0f0210dfecaf6cc1f5f38555bfc40d1896c207e7b69d" Feb 01 14:56:52 crc kubenswrapper[4820]: E0201 14:56:52.185474 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbbfd9adf7efbc9a679a0f0210dfecaf6cc1f5f38555bfc40d1896c207e7b69d\": container with ID starting with cbbfd9adf7efbc9a679a0f0210dfecaf6cc1f5f38555bfc40d1896c207e7b69d not found: ID does not exist" containerID="cbbfd9adf7efbc9a679a0f0210dfecaf6cc1f5f38555bfc40d1896c207e7b69d" Feb 01 14:56:52 crc kubenswrapper[4820]: I0201 14:56:52.185505 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbbfd9adf7efbc9a679a0f0210dfecaf6cc1f5f38555bfc40d1896c207e7b69d"} err="failed to get container status \"cbbfd9adf7efbc9a679a0f0210dfecaf6cc1f5f38555bfc40d1896c207e7b69d\": rpc error: code = NotFound desc = could not find container \"cbbfd9adf7efbc9a679a0f0210dfecaf6cc1f5f38555bfc40d1896c207e7b69d\": container with ID starting with cbbfd9adf7efbc9a679a0f0210dfecaf6cc1f5f38555bfc40d1896c207e7b69d not found: ID does not exist" Feb 01 14:56:53 crc kubenswrapper[4820]: I0201 14:56:53.209899 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="700c0a35-18f8-4671-ab20-d83888994997" path="/var/lib/kubelet/pods/700c0a35-18f8-4671-ab20-d83888994997/volumes" Feb 01 14:56:58 crc kubenswrapper[4820]: I0201 14:56:58.125479 4820 generic.go:334] "Generic (PLEG): container finished" podID="9d8f9e59-fa16-4f98-84f2-0b66514e6a0f" containerID="8cc71a63c5e624c9137d67728b9f14f2f9c2d04bd635b1c0eb70a701f8841dc5" exitCode=0 Feb 01 14:56:58 crc kubenswrapper[4820]: I0201 14:56:58.125543 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jzj27" event={"ID":"9d8f9e59-fa16-4f98-84f2-0b66514e6a0f","Type":"ContainerDied","Data":"8cc71a63c5e624c9137d67728b9f14f2f9c2d04bd635b1c0eb70a701f8841dc5"} Feb 01 14:56:59 crc kubenswrapper[4820]: I0201 14:56:59.623869 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jzj27" Feb 01 14:56:59 crc kubenswrapper[4820]: I0201 14:56:59.675539 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9d8f9e59-fa16-4f98-84f2-0b66514e6a0f-ceph\") pod \"9d8f9e59-fa16-4f98-84f2-0b66514e6a0f\" (UID: \"9d8f9e59-fa16-4f98-84f2-0b66514e6a0f\") " Feb 01 14:56:59 crc kubenswrapper[4820]: I0201 14:56:59.675707 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9d8f9e59-fa16-4f98-84f2-0b66514e6a0f-inventory-0\") pod \"9d8f9e59-fa16-4f98-84f2-0b66514e6a0f\" (UID: \"9d8f9e59-fa16-4f98-84f2-0b66514e6a0f\") " Feb 01 14:56:59 crc kubenswrapper[4820]: I0201 14:56:59.675767 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d8f9e59-fa16-4f98-84f2-0b66514e6a0f-ssh-key-openstack-edpm-ipam\") pod \"9d8f9e59-fa16-4f98-84f2-0b66514e6a0f\" (UID: \"9d8f9e59-fa16-4f98-84f2-0b66514e6a0f\") " Feb 01 14:56:59 crc kubenswrapper[4820]: I0201 14:56:59.675810 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m698\" (UniqueName: \"kubernetes.io/projected/9d8f9e59-fa16-4f98-84f2-0b66514e6a0f-kube-api-access-9m698\") pod \"9d8f9e59-fa16-4f98-84f2-0b66514e6a0f\" (UID: \"9d8f9e59-fa16-4f98-84f2-0b66514e6a0f\") " Feb 01 14:56:59 crc kubenswrapper[4820]: I0201 14:56:59.684553 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d8f9e59-fa16-4f98-84f2-0b66514e6a0f-kube-api-access-9m698" (OuterVolumeSpecName: "kube-api-access-9m698") pod "9d8f9e59-fa16-4f98-84f2-0b66514e6a0f" (UID: "9d8f9e59-fa16-4f98-84f2-0b66514e6a0f"). InnerVolumeSpecName "kube-api-access-9m698". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:56:59 crc kubenswrapper[4820]: I0201 14:56:59.684564 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d8f9e59-fa16-4f98-84f2-0b66514e6a0f-ceph" (OuterVolumeSpecName: "ceph") pod "9d8f9e59-fa16-4f98-84f2-0b66514e6a0f" (UID: "9d8f9e59-fa16-4f98-84f2-0b66514e6a0f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:56:59 crc kubenswrapper[4820]: I0201 14:56:59.699554 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d8f9e59-fa16-4f98-84f2-0b66514e6a0f-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "9d8f9e59-fa16-4f98-84f2-0b66514e6a0f" (UID: "9d8f9e59-fa16-4f98-84f2-0b66514e6a0f"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:56:59 crc kubenswrapper[4820]: I0201 14:56:59.708386 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d8f9e59-fa16-4f98-84f2-0b66514e6a0f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9d8f9e59-fa16-4f98-84f2-0b66514e6a0f" (UID: "9d8f9e59-fa16-4f98-84f2-0b66514e6a0f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:56:59 crc kubenswrapper[4820]: I0201 14:56:59.778420 4820 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9d8f9e59-fa16-4f98-84f2-0b66514e6a0f-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 01 14:56:59 crc kubenswrapper[4820]: I0201 14:56:59.778461 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d8f9e59-fa16-4f98-84f2-0b66514e6a0f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 14:56:59 crc kubenswrapper[4820]: I0201 14:56:59.778478 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m698\" (UniqueName: \"kubernetes.io/projected/9d8f9e59-fa16-4f98-84f2-0b66514e6a0f-kube-api-access-9m698\") on node \"crc\" DevicePath \"\"" Feb 01 14:56:59 crc kubenswrapper[4820]: I0201 14:56:59.778488 4820 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9d8f9e59-fa16-4f98-84f2-0b66514e6a0f-ceph\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.145051 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jzj27" event={"ID":"9d8f9e59-fa16-4f98-84f2-0b66514e6a0f","Type":"ContainerDied","Data":"c4da80b72ee6d54a7e316538c232c4434168b0448bcec530157732a07dcd9df2"} Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.145112 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4da80b72ee6d54a7e316538c232c4434168b0448bcec530157732a07dcd9df2" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.145122 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jzj27" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.227193 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4dbwj"] Feb 01 14:57:00 crc kubenswrapper[4820]: E0201 14:57:00.227654 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d8f9e59-fa16-4f98-84f2-0b66514e6a0f" containerName="ssh-known-hosts-edpm-deployment" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.227678 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d8f9e59-fa16-4f98-84f2-0b66514e6a0f" containerName="ssh-known-hosts-edpm-deployment" Feb 01 14:57:00 crc kubenswrapper[4820]: E0201 14:57:00.227707 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="700c0a35-18f8-4671-ab20-d83888994997" containerName="registry-server" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.227716 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="700c0a35-18f8-4671-ab20-d83888994997" containerName="registry-server" Feb 01 14:57:00 crc kubenswrapper[4820]: E0201 14:57:00.227737 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="700c0a35-18f8-4671-ab20-d83888994997" containerName="extract-content" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.227745 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="700c0a35-18f8-4671-ab20-d83888994997" containerName="extract-content" Feb 01 14:57:00 crc kubenswrapper[4820]: E0201 14:57:00.227761 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="700c0a35-18f8-4671-ab20-d83888994997" containerName="extract-utilities" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.227769 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="700c0a35-18f8-4671-ab20-d83888994997" containerName="extract-utilities" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.228008 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d8f9e59-fa16-4f98-84f2-0b66514e6a0f" containerName="ssh-known-hosts-edpm-deployment" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.228038 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="700c0a35-18f8-4671-ab20-d83888994997" containerName="registry-server" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.228740 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4dbwj" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.230684 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.231420 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.232108 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.233456 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.233957 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.235802 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4dbwj"] Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.287046 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffeb1c3b-061a-4f96-95c1-69011e7f7028-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4dbwj\" (UID: \"ffeb1c3b-061a-4f96-95c1-69011e7f7028\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4dbwj" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.287324 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffeb1c3b-061a-4f96-95c1-69011e7f7028-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4dbwj\" (UID: \"ffeb1c3b-061a-4f96-95c1-69011e7f7028\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4dbwj" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.287536 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ffeb1c3b-061a-4f96-95c1-69011e7f7028-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4dbwj\" (UID: \"ffeb1c3b-061a-4f96-95c1-69011e7f7028\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4dbwj" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.287791 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hb2g\" (UniqueName: \"kubernetes.io/projected/ffeb1c3b-061a-4f96-95c1-69011e7f7028-kube-api-access-9hb2g\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4dbwj\" (UID: \"ffeb1c3b-061a-4f96-95c1-69011e7f7028\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4dbwj" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.389597 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ffeb1c3b-061a-4f96-95c1-69011e7f7028-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4dbwj\" (UID: \"ffeb1c3b-061a-4f96-95c1-69011e7f7028\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4dbwj" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.389664 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hb2g\" (UniqueName: 
\"kubernetes.io/projected/ffeb1c3b-061a-4f96-95c1-69011e7f7028-kube-api-access-9hb2g\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4dbwj\" (UID: \"ffeb1c3b-061a-4f96-95c1-69011e7f7028\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4dbwj" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.389753 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffeb1c3b-061a-4f96-95c1-69011e7f7028-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4dbwj\" (UID: \"ffeb1c3b-061a-4f96-95c1-69011e7f7028\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4dbwj" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.389798 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffeb1c3b-061a-4f96-95c1-69011e7f7028-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4dbwj\" (UID: \"ffeb1c3b-061a-4f96-95c1-69011e7f7028\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4dbwj" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.393734 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ffeb1c3b-061a-4f96-95c1-69011e7f7028-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4dbwj\" (UID: \"ffeb1c3b-061a-4f96-95c1-69011e7f7028\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4dbwj" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.393885 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffeb1c3b-061a-4f96-95c1-69011e7f7028-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4dbwj\" (UID: \"ffeb1c3b-061a-4f96-95c1-69011e7f7028\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4dbwj" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.394400 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffeb1c3b-061a-4f96-95c1-69011e7f7028-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4dbwj\" (UID: \"ffeb1c3b-061a-4f96-95c1-69011e7f7028\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4dbwj" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.411187 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hb2g\" (UniqueName: \"kubernetes.io/projected/ffeb1c3b-061a-4f96-95c1-69011e7f7028-kube-api-access-9hb2g\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4dbwj\" (UID: \"ffeb1c3b-061a-4f96-95c1-69011e7f7028\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4dbwj" Feb 01 14:57:00 crc kubenswrapper[4820]: I0201 14:57:00.557269 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4dbwj" Feb 01 14:57:01 crc kubenswrapper[4820]: I0201 14:57:01.128266 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4dbwj"] Feb 01 14:57:01 crc kubenswrapper[4820]: I0201 14:57:01.163197 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4dbwj" event={"ID":"ffeb1c3b-061a-4f96-95c1-69011e7f7028","Type":"ContainerStarted","Data":"9af13b8d0fa416b8b481887194a477e83ff787602459f0bfebf2cd4463e849e8"} Feb 01 14:57:02 crc kubenswrapper[4820]: I0201 14:57:02.172250 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4dbwj" event={"ID":"ffeb1c3b-061a-4f96-95c1-69011e7f7028","Type":"ContainerStarted","Data":"ad2a0d7d13c7c0b2978e9166c7dee03ecbe10506d2c0ff56b72b1b84d9f8edcb"} Feb 01 14:57:02 crc kubenswrapper[4820]: I0201 14:57:02.196550 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4dbwj" podStartSLOduration=1.765433346 podStartE2EDuration="2.196530622s" podCreationTimestamp="2026-02-01 14:57:00 +0000 UTC" firstStartedPulling="2026-02-01 14:57:01.14974956 +0000 UTC m=+2162.670115884" lastFinishedPulling="2026-02-01 14:57:01.580846846 +0000 UTC m=+2163.101213160" observedRunningTime="2026-02-01 14:57:02.188681886 +0000 UTC m=+2163.709048190" watchObservedRunningTime="2026-02-01 14:57:02.196530622 +0000 UTC m=+2163.716896926" Feb 01 14:57:08 crc kubenswrapper[4820]: I0201 14:57:08.225145 4820 generic.go:334] "Generic (PLEG): container finished" podID="ffeb1c3b-061a-4f96-95c1-69011e7f7028" containerID="ad2a0d7d13c7c0b2978e9166c7dee03ecbe10506d2c0ff56b72b1b84d9f8edcb" exitCode=0 Feb 01 14:57:08 crc kubenswrapper[4820]: I0201 14:57:08.225204 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4dbwj" event={"ID":"ffeb1c3b-061a-4f96-95c1-69011e7f7028","Type":"ContainerDied","Data":"ad2a0d7d13c7c0b2978e9166c7dee03ecbe10506d2c0ff56b72b1b84d9f8edcb"} Feb 01 14:57:09 crc kubenswrapper[4820]: I0201 14:57:09.695376 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4dbwj" Feb 01 14:57:09 crc kubenswrapper[4820]: I0201 14:57:09.787609 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffeb1c3b-061a-4f96-95c1-69011e7f7028-ssh-key-openstack-edpm-ipam\") pod \"ffeb1c3b-061a-4f96-95c1-69011e7f7028\" (UID: \"ffeb1c3b-061a-4f96-95c1-69011e7f7028\") " Feb 01 14:57:09 crc kubenswrapper[4820]: I0201 14:57:09.787647 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hb2g\" (UniqueName: \"kubernetes.io/projected/ffeb1c3b-061a-4f96-95c1-69011e7f7028-kube-api-access-9hb2g\") pod \"ffeb1c3b-061a-4f96-95c1-69011e7f7028\" (UID: \"ffeb1c3b-061a-4f96-95c1-69011e7f7028\") " Feb 01 14:57:09 crc kubenswrapper[4820]: I0201 14:57:09.787708 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ffeb1c3b-061a-4f96-95c1-69011e7f7028-ceph\") pod \"ffeb1c3b-061a-4f96-95c1-69011e7f7028\" (UID: \"ffeb1c3b-061a-4f96-95c1-69011e7f7028\") " Feb 01 14:57:09 crc kubenswrapper[4820]: I0201 14:57:09.787749 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffeb1c3b-061a-4f96-95c1-69011e7f7028-inventory\") pod \"ffeb1c3b-061a-4f96-95c1-69011e7f7028\" (UID: \"ffeb1c3b-061a-4f96-95c1-69011e7f7028\") " Feb 01 14:57:09 crc kubenswrapper[4820]: I0201 14:57:09.793004 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffeb1c3b-061a-4f96-95c1-69011e7f7028-ceph" (OuterVolumeSpecName: "ceph") pod "ffeb1c3b-061a-4f96-95c1-69011e7f7028" (UID: "ffeb1c3b-061a-4f96-95c1-69011e7f7028"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:57:09 crc kubenswrapper[4820]: I0201 14:57:09.794318 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffeb1c3b-061a-4f96-95c1-69011e7f7028-kube-api-access-9hb2g" (OuterVolumeSpecName: "kube-api-access-9hb2g") pod "ffeb1c3b-061a-4f96-95c1-69011e7f7028" (UID: "ffeb1c3b-061a-4f96-95c1-69011e7f7028"). InnerVolumeSpecName "kube-api-access-9hb2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:57:09 crc kubenswrapper[4820]: I0201 14:57:09.811293 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffeb1c3b-061a-4f96-95c1-69011e7f7028-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ffeb1c3b-061a-4f96-95c1-69011e7f7028" (UID: "ffeb1c3b-061a-4f96-95c1-69011e7f7028"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:57:09 crc kubenswrapper[4820]: I0201 14:57:09.811817 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffeb1c3b-061a-4f96-95c1-69011e7f7028-inventory" (OuterVolumeSpecName: "inventory") pod "ffeb1c3b-061a-4f96-95c1-69011e7f7028" (UID: "ffeb1c3b-061a-4f96-95c1-69011e7f7028"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:57:09 crc kubenswrapper[4820]: I0201 14:57:09.889640 4820 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ffeb1c3b-061a-4f96-95c1-69011e7f7028-ceph\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:09 crc kubenswrapper[4820]: I0201 14:57:09.889740 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffeb1c3b-061a-4f96-95c1-69011e7f7028-inventory\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:09 crc kubenswrapper[4820]: I0201 14:57:09.889757 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffeb1c3b-061a-4f96-95c1-69011e7f7028-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:09 crc kubenswrapper[4820]: I0201 14:57:09.889771 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hb2g\" (UniqueName: \"kubernetes.io/projected/ffeb1c3b-061a-4f96-95c1-69011e7f7028-kube-api-access-9hb2g\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.244494 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4dbwj" event={"ID":"ffeb1c3b-061a-4f96-95c1-69011e7f7028","Type":"ContainerDied","Data":"9af13b8d0fa416b8b481887194a477e83ff787602459f0bfebf2cd4463e849e8"} Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.244530 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9af13b8d0fa416b8b481887194a477e83ff787602459f0bfebf2cd4463e849e8" Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.244800 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4dbwj" Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.386628 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b"] Feb 01 14:57:10 crc kubenswrapper[4820]: E0201 14:57:10.387463 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffeb1c3b-061a-4f96-95c1-69011e7f7028" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.387486 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffeb1c3b-061a-4f96-95c1-69011e7f7028" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.387713 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffeb1c3b-061a-4f96-95c1-69011e7f7028" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.388499 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b" Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.392866 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.396030 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b"] Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.396898 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5383bb66-a0ea-4b49-8f0a-0a2d9e71d425-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b\" (UID: \"5383bb66-a0ea-4b49-8f0a-0a2d9e71d425\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b" Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.396993 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5383bb66-a0ea-4b49-8f0a-0a2d9e71d425-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b\" (UID: \"5383bb66-a0ea-4b49-8f0a-0a2d9e71d425\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b" Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.397045 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5383bb66-a0ea-4b49-8f0a-0a2d9e71d425-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b\" (UID: \"5383bb66-a0ea-4b49-8f0a-0a2d9e71d425\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b" Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.397072 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx6df\" (UniqueName: \"kubernetes.io/projected/5383bb66-a0ea-4b49-8f0a-0a2d9e71d425-kube-api-access-gx6df\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b\" (UID: \"5383bb66-a0ea-4b49-8f0a-0a2d9e71d425\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b" Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.403632 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.403827 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw" Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.403990 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.404206 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.498866 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5383bb66-a0ea-4b49-8f0a-0a2d9e71d425-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b\" (UID: \"5383bb66-a0ea-4b49-8f0a-0a2d9e71d425\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b" Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.498950 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gx6df\" (UniqueName: \"kubernetes.io/projected/5383bb66-a0ea-4b49-8f0a-0a2d9e71d425-kube-api-access-gx6df\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b\" (UID: \"5383bb66-a0ea-4b49-8f0a-0a2d9e71d425\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b" Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.499069 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5383bb66-a0ea-4b49-8f0a-0a2d9e71d425-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b\" (UID: \"5383bb66-a0ea-4b49-8f0a-0a2d9e71d425\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b" Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.499128 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5383bb66-a0ea-4b49-8f0a-0a2d9e71d425-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b\" (UID: \"5383bb66-a0ea-4b49-8f0a-0a2d9e71d425\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b" Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.504194 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5383bb66-a0ea-4b49-8f0a-0a2d9e71d425-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b\" (UID: \"5383bb66-a0ea-4b49-8f0a-0a2d9e71d425\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b" Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.504421 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5383bb66-a0ea-4b49-8f0a-0a2d9e71d425-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b\" (UID: \"5383bb66-a0ea-4b49-8f0a-0a2d9e71d425\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b" Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.504786 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5383bb66-a0ea-4b49-8f0a-0a2d9e71d425-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b\" (UID: \"5383bb66-a0ea-4b49-8f0a-0a2d9e71d425\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b" Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.515668 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx6df\" (UniqueName: \"kubernetes.io/projected/5383bb66-a0ea-4b49-8f0a-0a2d9e71d425-kube-api-access-gx6df\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b\" (UID: \"5383bb66-a0ea-4b49-8f0a-0a2d9e71d425\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b" Feb 01 14:57:10 crc kubenswrapper[4820]: I0201 14:57:10.722977 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b" Feb 01 14:57:11 crc kubenswrapper[4820]: I0201 14:57:11.254327 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b"] Feb 01 14:57:12 crc kubenswrapper[4820]: I0201 14:57:12.260154 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b" event={"ID":"5383bb66-a0ea-4b49-8f0a-0a2d9e71d425","Type":"ContainerStarted","Data":"379795ef4c56d61dc1cc7f8d4b814dba82d76b46369e8995f616cbeb99352dd6"} Feb 01 14:57:12 crc kubenswrapper[4820]: I0201 14:57:12.260534 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b" event={"ID":"5383bb66-a0ea-4b49-8f0a-0a2d9e71d425","Type":"ContainerStarted","Data":"7dbf75960e0c73bce894e53be304213bf4a84c5d45543bb05a898b2be7133830"} Feb 01 14:57:12 crc kubenswrapper[4820]: I0201 14:57:12.281601 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b" podStartSLOduration=1.88554362 podStartE2EDuration="2.281579405s" podCreationTimestamp="2026-02-01 14:57:10 +0000 UTC" firstStartedPulling="2026-02-01 14:57:11.255436762 +0000 UTC m=+2172.775803046" lastFinishedPulling="2026-02-01 14:57:11.651472507 +0000 UTC m=+2173.171838831" observedRunningTime="2026-02-01 14:57:12.274715132 +0000 UTC m=+2173.795081436" watchObservedRunningTime="2026-02-01 14:57:12.281579405 +0000 UTC m=+2173.801945699" Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.179212 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-94snz"] Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.186502 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-94snz" Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.194926 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-94snz"] Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.220390 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxcvj\" (UniqueName: \"kubernetes.io/projected/93854f85-6619-4538-9010-6ee8c83434e5-kube-api-access-rxcvj\") pod \"redhat-marketplace-94snz\" (UID: \"93854f85-6619-4538-9010-6ee8c83434e5\") " pod="openshift-marketplace/redhat-marketplace-94snz" Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.220527 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93854f85-6619-4538-9010-6ee8c83434e5-utilities\") pod \"redhat-marketplace-94snz\" (UID: \"93854f85-6619-4538-9010-6ee8c83434e5\") " pod="openshift-marketplace/redhat-marketplace-94snz" Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.220996 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93854f85-6619-4538-9010-6ee8c83434e5-catalog-content\") pod \"redhat-marketplace-94snz\" (UID: \"93854f85-6619-4538-9010-6ee8c83434e5\") " pod="openshift-marketplace/redhat-marketplace-94snz" Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.323175 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93854f85-6619-4538-9010-6ee8c83434e5-catalog-content\") pod \"redhat-marketplace-94snz\" (UID: \"93854f85-6619-4538-9010-6ee8c83434e5\") " pod="openshift-marketplace/redhat-marketplace-94snz" Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.323275 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxcvj\" (UniqueName: \"kubernetes.io/projected/93854f85-6619-4538-9010-6ee8c83434e5-kube-api-access-rxcvj\") pod \"redhat-marketplace-94snz\" (UID: \"93854f85-6619-4538-9010-6ee8c83434e5\") " pod="openshift-marketplace/redhat-marketplace-94snz" Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.323315 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93854f85-6619-4538-9010-6ee8c83434e5-utilities\") pod \"redhat-marketplace-94snz\" (UID: \"93854f85-6619-4538-9010-6ee8c83434e5\") " pod="openshift-marketplace/redhat-marketplace-94snz" Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.323658 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93854f85-6619-4538-9010-6ee8c83434e5-catalog-content\") pod \"redhat-marketplace-94snz\" (UID: \"93854f85-6619-4538-9010-6ee8c83434e5\") " pod="openshift-marketplace/redhat-marketplace-94snz" Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.323790 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93854f85-6619-4538-9010-6ee8c83434e5-utilities\") pod \"redhat-marketplace-94snz\" (UID: \"93854f85-6619-4538-9010-6ee8c83434e5\") " pod="openshift-marketplace/redhat-marketplace-94snz" Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.355702 4820 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rxcvj\" (UniqueName: \"kubernetes.io/projected/93854f85-6619-4538-9010-6ee8c83434e5-kube-api-access-rxcvj\") pod \"redhat-marketplace-94snz\" (UID: \"93854f85-6619-4538-9010-6ee8c83434e5\") " pod="openshift-marketplace/redhat-marketplace-94snz" Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.360787 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s9zph"] Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.362544 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s9zph" Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.374591 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s9zph"] Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.424737 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25gmj\" (UniqueName: \"kubernetes.io/projected/e409e5d8-2819-4507-9a09-7a298271e1be-kube-api-access-25gmj\") pod \"certified-operators-s9zph\" (UID: \"e409e5d8-2819-4507-9a09-7a298271e1be\") " pod="openshift-marketplace/certified-operators-s9zph" Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.425160 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e409e5d8-2819-4507-9a09-7a298271e1be-utilities\") pod \"certified-operators-s9zph\" (UID: \"e409e5d8-2819-4507-9a09-7a298271e1be\") " pod="openshift-marketplace/certified-operators-s9zph" Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.425213 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e409e5d8-2819-4507-9a09-7a298271e1be-catalog-content\") pod \"certified-operators-s9zph\" (UID: \"e409e5d8-2819-4507-9a09-7a298271e1be\") " pod="openshift-marketplace/certified-operators-s9zph" Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.526725 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e409e5d8-2819-4507-9a09-7a298271e1be-utilities\") pod \"certified-operators-s9zph\" (UID: \"e409e5d8-2819-4507-9a09-7a298271e1be\") " pod="openshift-marketplace/certified-operators-s9zph" Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.526835 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e409e5d8-2819-4507-9a09-7a298271e1be-catalog-content\") pod \"certified-operators-s9zph\" (UID: \"e409e5d8-2819-4507-9a09-7a298271e1be\") " pod="openshift-marketplace/certified-operators-s9zph" Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.526954 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25gmj\" (UniqueName: \"kubernetes.io/projected/e409e5d8-2819-4507-9a09-7a298271e1be-kube-api-access-25gmj\") pod \"certified-operators-s9zph\" (UID: \"e409e5d8-2819-4507-9a09-7a298271e1be\") " pod="openshift-marketplace/certified-operators-s9zph" Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.527061 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-94snz" Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.527364 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e409e5d8-2819-4507-9a09-7a298271e1be-utilities\") pod \"certified-operators-s9zph\" (UID: \"e409e5d8-2819-4507-9a09-7a298271e1be\") " pod="openshift-marketplace/certified-operators-s9zph" Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.527411 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e409e5d8-2819-4507-9a09-7a298271e1be-catalog-content\") pod \"certified-operators-s9zph\" (UID: \"e409e5d8-2819-4507-9a09-7a298271e1be\") " pod="openshift-marketplace/certified-operators-s9zph" Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.576551 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25gmj\" (UniqueName: \"kubernetes.io/projected/e409e5d8-2819-4507-9a09-7a298271e1be-kube-api-access-25gmj\") pod \"certified-operators-s9zph\" (UID: \"e409e5d8-2819-4507-9a09-7a298271e1be\") " pod="openshift-marketplace/certified-operators-s9zph" Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.719723 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s9zph" Feb 01 14:57:16 crc kubenswrapper[4820]: I0201 14:57:16.984675 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-94snz"] Feb 01 14:57:17 crc kubenswrapper[4820]: I0201 14:57:17.246635 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s9zph"] Feb 01 14:57:17 crc kubenswrapper[4820]: I0201 14:57:17.299658 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9zph" event={"ID":"e409e5d8-2819-4507-9a09-7a298271e1be","Type":"ContainerStarted","Data":"29cf192a1a7e0c109b39366ba11b5fbc5a6e51fbf1c2ecd92460fa99f70d4b0d"} Feb 01 14:57:17 crc kubenswrapper[4820]: I0201 14:57:17.302434 4820 generic.go:334] "Generic (PLEG): container finished" podID="93854f85-6619-4538-9010-6ee8c83434e5" containerID="9faa1a315602e48518967ed748d261a940bc43e80ca7fcc1fc1042aa1c40e418" exitCode=0 Feb 01 14:57:17 crc kubenswrapper[4820]: I0201 14:57:17.302481 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94snz" event={"ID":"93854f85-6619-4538-9010-6ee8c83434e5","Type":"ContainerDied","Data":"9faa1a315602e48518967ed748d261a940bc43e80ca7fcc1fc1042aa1c40e418"} Feb 01 14:57:17 crc kubenswrapper[4820]: I0201 14:57:17.302518 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94snz" event={"ID":"93854f85-6619-4538-9010-6ee8c83434e5","Type":"ContainerStarted","Data":"2933c2f2146d31d283d3e313929c75683e256f8132beefbccd266a402fcaa044"} Feb 01 14:57:18 crc kubenswrapper[4820]: I0201 14:57:18.317428 4820 generic.go:334] "Generic (PLEG): container finished" podID="93854f85-6619-4538-9010-6ee8c83434e5" containerID="18f58d502b0a2f4dbe5bd28a72add2d2c91adddfc71f1e9a65a592ce0f396041" exitCode=0 Feb 01 14:57:18 crc kubenswrapper[4820]: I0201 14:57:18.317472 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94snz" 
event={"ID":"93854f85-6619-4538-9010-6ee8c83434e5","Type":"ContainerDied","Data":"18f58d502b0a2f4dbe5bd28a72add2d2c91adddfc71f1e9a65a592ce0f396041"} Feb 01 14:57:18 crc kubenswrapper[4820]: I0201 14:57:18.320528 4820 generic.go:334] "Generic (PLEG): container finished" podID="e409e5d8-2819-4507-9a09-7a298271e1be" containerID="410fab44103d9c0205605b6968e701676f8102828cc491deeef2b297b17cd658" exitCode=0 Feb 01 14:57:18 crc kubenswrapper[4820]: I0201 14:57:18.320575 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9zph" event={"ID":"e409e5d8-2819-4507-9a09-7a298271e1be","Type":"ContainerDied","Data":"410fab44103d9c0205605b6968e701676f8102828cc491deeef2b297b17cd658"} Feb 01 14:57:18 crc kubenswrapper[4820]: I0201 14:57:18.780009 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bzm6p"] Feb 01 14:57:18 crc kubenswrapper[4820]: I0201 14:57:18.821783 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bzm6p"] Feb 01 14:57:18 crc kubenswrapper[4820]: I0201 14:57:18.821913 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bzm6p" Feb 01 14:57:18 crc kubenswrapper[4820]: I0201 14:57:18.874407 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5k8h\" (UniqueName: \"kubernetes.io/projected/393f7c53-888e-4f1f-85e9-ede8c67180a8-kube-api-access-k5k8h\") pod \"community-operators-bzm6p\" (UID: \"393f7c53-888e-4f1f-85e9-ede8c67180a8\") " pod="openshift-marketplace/community-operators-bzm6p" Feb 01 14:57:18 crc kubenswrapper[4820]: I0201 14:57:18.874522 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/393f7c53-888e-4f1f-85e9-ede8c67180a8-utilities\") pod \"community-operators-bzm6p\" (UID: \"393f7c53-888e-4f1f-85e9-ede8c67180a8\") " pod="openshift-marketplace/community-operators-bzm6p" Feb 01 14:57:18 crc kubenswrapper[4820]: I0201 14:57:18.874628 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/393f7c53-888e-4f1f-85e9-ede8c67180a8-catalog-content\") pod \"community-operators-bzm6p\" (UID: \"393f7c53-888e-4f1f-85e9-ede8c67180a8\") " pod="openshift-marketplace/community-operators-bzm6p" Feb 01 14:57:18 crc kubenswrapper[4820]: I0201 14:57:18.976926 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5k8h\" (UniqueName: \"kubernetes.io/projected/393f7c53-888e-4f1f-85e9-ede8c67180a8-kube-api-access-k5k8h\") pod \"community-operators-bzm6p\" (UID: \"393f7c53-888e-4f1f-85e9-ede8c67180a8\") " pod="openshift-marketplace/community-operators-bzm6p" Feb 01 14:57:18 crc kubenswrapper[4820]: I0201 14:57:18.977002 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/393f7c53-888e-4f1f-85e9-ede8c67180a8-utilities\") pod \"community-operators-bzm6p\" (UID: \"393f7c53-888e-4f1f-85e9-ede8c67180a8\") " pod="openshift-marketplace/community-operators-bzm6p" Feb 01 14:57:18 crc kubenswrapper[4820]: I0201 14:57:18.977038 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/393f7c53-888e-4f1f-85e9-ede8c67180a8-catalog-content\") pod \"community-operators-bzm6p\" (UID: \"393f7c53-888e-4f1f-85e9-ede8c67180a8\") " pod="openshift-marketplace/community-operators-bzm6p" Feb 01 14:57:18 crc kubenswrapper[4820]: I0201 14:57:18.977605 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/393f7c53-888e-4f1f-85e9-ede8c67180a8-utilities\") pod \"community-operators-bzm6p\" (UID: \"393f7c53-888e-4f1f-85e9-ede8c67180a8\") " pod="openshift-marketplace/community-operators-bzm6p" Feb 01 14:57:18 crc kubenswrapper[4820]: I0201 14:57:18.977622 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/393f7c53-888e-4f1f-85e9-ede8c67180a8-catalog-content\") pod \"community-operators-bzm6p\" (UID: \"393f7c53-888e-4f1f-85e9-ede8c67180a8\") " pod="openshift-marketplace/community-operators-bzm6p" Feb 01 14:57:18 crc kubenswrapper[4820]: I0201 14:57:18.998009 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5k8h\" (UniqueName: \"kubernetes.io/projected/393f7c53-888e-4f1f-85e9-ede8c67180a8-kube-api-access-k5k8h\") pod \"community-operators-bzm6p\" (UID: \"393f7c53-888e-4f1f-85e9-ede8c67180a8\") " pod="openshift-marketplace/community-operators-bzm6p" Feb 01 14:57:19 crc kubenswrapper[4820]: I0201 14:57:19.165205 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bzm6p" Feb 01 14:57:19 crc kubenswrapper[4820]: I0201 14:57:19.246031 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:57:19 crc kubenswrapper[4820]: I0201 14:57:19.246085 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:57:19 crc kubenswrapper[4820]: I0201 14:57:19.342177 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94snz" event={"ID":"93854f85-6619-4538-9010-6ee8c83434e5","Type":"ContainerStarted","Data":"b9357ef6e06a74e7405c823059a458df74f161d87cfadd7dd1bf68ac277f72c8"} Feb 01 14:57:19 crc kubenswrapper[4820]: I0201 14:57:19.363178 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-94snz" podStartSLOduration=1.977046556 podStartE2EDuration="3.363161868s" podCreationTimestamp="2026-02-01 14:57:16 +0000 UTC" firstStartedPulling="2026-02-01 14:57:17.304496582 +0000 UTC m=+2178.824862866" lastFinishedPulling="2026-02-01 14:57:18.690611854 +0000 UTC m=+2180.210978178" observedRunningTime="2026-02-01 14:57:19.361783886 +0000 UTC m=+2180.882150160" watchObservedRunningTime="2026-02-01 14:57:19.363161868 +0000 UTC m=+2180.883528152" Feb 01 14:57:19 crc kubenswrapper[4820]: I0201 14:57:19.677182 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bzm6p"] Feb 01 14:57:19 crc kubenswrapper[4820]: W0201 14:57:19.678083 4820 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod393f7c53_888e_4f1f_85e9_ede8c67180a8.slice/crio-23a22f20a468059c7aa6c4a806fc684f534d5772c53cc629397764841ffce867 WatchSource:0}: Error finding container 23a22f20a468059c7aa6c4a806fc684f534d5772c53cc629397764841ffce867: Status 404 returned error can't find the container with id 23a22f20a468059c7aa6c4a806fc684f534d5772c53cc629397764841ffce867 Feb 01 14:57:20 crc kubenswrapper[4820]: I0201 14:57:20.361462 4820 generic.go:334] "Generic (PLEG): container finished" podID="393f7c53-888e-4f1f-85e9-ede8c67180a8" containerID="f9760d292f7fedb931374fdb3a939977c1974d043f8fb8674b91c441af5d73c1" exitCode=0 Feb 01 14:57:20 crc kubenswrapper[4820]: I0201 14:57:20.361521 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzm6p" event={"ID":"393f7c53-888e-4f1f-85e9-ede8c67180a8","Type":"ContainerDied","Data":"f9760d292f7fedb931374fdb3a939977c1974d043f8fb8674b91c441af5d73c1"} Feb 01 14:57:20 crc kubenswrapper[4820]: I0201 14:57:20.361545 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzm6p" event={"ID":"393f7c53-888e-4f1f-85e9-ede8c67180a8","Type":"ContainerStarted","Data":"23a22f20a468059c7aa6c4a806fc684f534d5772c53cc629397764841ffce867"} Feb 01 14:57:20 crc kubenswrapper[4820]: I0201 14:57:20.366928 4820 generic.go:334] "Generic (PLEG): container finished" podID="e409e5d8-2819-4507-9a09-7a298271e1be" containerID="1a2f65312ca056bf55899bff39267bf1a1ee843df283d124f5bc1ab3f5beef4f" exitCode=0 Feb 01 14:57:20 crc kubenswrapper[4820]: I0201 14:57:20.367053 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9zph" event={"ID":"e409e5d8-2819-4507-9a09-7a298271e1be","Type":"ContainerDied","Data":"1a2f65312ca056bf55899bff39267bf1a1ee843df283d124f5bc1ab3f5beef4f"} Feb 01 14:57:21 crc kubenswrapper[4820]: I0201 14:57:21.375695 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9zph" event={"ID":"e409e5d8-2819-4507-9a09-7a298271e1be","Type":"ContainerStarted","Data":"e1ac066ffe3ef9bd56c535a58a7f92c432ba4e76aaf4ead9d9b9348e64b09c67"} Feb 01 14:57:21 crc kubenswrapper[4820]: I0201 14:57:21.377247 4820 generic.go:334] "Generic (PLEG): container finished" podID="5383bb66-a0ea-4b49-8f0a-0a2d9e71d425" containerID="379795ef4c56d61dc1cc7f8d4b814dba82d76b46369e8995f616cbeb99352dd6" exitCode=0 Feb 01 14:57:21 crc kubenswrapper[4820]: I0201 14:57:21.377295 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b" event={"ID":"5383bb66-a0ea-4b49-8f0a-0a2d9e71d425","Type":"ContainerDied","Data":"379795ef4c56d61dc1cc7f8d4b814dba82d76b46369e8995f616cbeb99352dd6"} Feb 01 14:57:21 crc kubenswrapper[4820]: I0201 14:57:21.379075 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzm6p" event={"ID":"393f7c53-888e-4f1f-85e9-ede8c67180a8","Type":"ContainerStarted","Data":"1951e743db6d6b652c10c2d5d3ac0503e004456df5dbaa3386c18dc1472ae949"} Feb 01 14:57:21 crc kubenswrapper[4820]: I0201 14:57:21.396833 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s9zph" podStartSLOduration=2.907499878 podStartE2EDuration="5.396814051s" podCreationTimestamp="2026-02-01 14:57:16 +0000 UTC" firstStartedPulling="2026-02-01 14:57:18.322320276 +0000 UTC m=+2179.842686560" 
lastFinishedPulling="2026-02-01 14:57:20.811634459 +0000 UTC m=+2182.332000733" observedRunningTime="2026-02-01 14:57:21.389961669 +0000 UTC m=+2182.910327953" watchObservedRunningTime="2026-02-01 14:57:21.396814051 +0000 UTC m=+2182.917180325" Feb 01 14:57:22 crc kubenswrapper[4820]: I0201 14:57:22.390084 4820 generic.go:334] "Generic (PLEG): container finished" podID="393f7c53-888e-4f1f-85e9-ede8c67180a8" containerID="1951e743db6d6b652c10c2d5d3ac0503e004456df5dbaa3386c18dc1472ae949" exitCode=0 Feb 01 14:57:22 crc kubenswrapper[4820]: I0201 14:57:22.390294 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzm6p" event={"ID":"393f7c53-888e-4f1f-85e9-ede8c67180a8","Type":"ContainerDied","Data":"1951e743db6d6b652c10c2d5d3ac0503e004456df5dbaa3386c18dc1472ae949"} Feb 01 14:57:22 crc kubenswrapper[4820]: I0201 14:57:22.791369 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b" Feb 01 14:57:22 crc kubenswrapper[4820]: I0201 14:57:22.940617 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5383bb66-a0ea-4b49-8f0a-0a2d9e71d425-ceph\") pod \"5383bb66-a0ea-4b49-8f0a-0a2d9e71d425\" (UID: \"5383bb66-a0ea-4b49-8f0a-0a2d9e71d425\") " Feb 01 14:57:22 crc kubenswrapper[4820]: I0201 14:57:22.940903 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5383bb66-a0ea-4b49-8f0a-0a2d9e71d425-inventory\") pod \"5383bb66-a0ea-4b49-8f0a-0a2d9e71d425\" (UID: \"5383bb66-a0ea-4b49-8f0a-0a2d9e71d425\") " Feb 01 14:57:22 crc kubenswrapper[4820]: I0201 14:57:22.941021 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5383bb66-a0ea-4b49-8f0a-0a2d9e71d425-ssh-key-openstack-edpm-ipam\") pod \"5383bb66-a0ea-4b49-8f0a-0a2d9e71d425\" (UID: \"5383bb66-a0ea-4b49-8f0a-0a2d9e71d425\") " Feb 01 14:57:22 crc kubenswrapper[4820]: I0201 14:57:22.941099 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx6df\" (UniqueName: \"kubernetes.io/projected/5383bb66-a0ea-4b49-8f0a-0a2d9e71d425-kube-api-access-gx6df\") pod \"5383bb66-a0ea-4b49-8f0a-0a2d9e71d425\" (UID: \"5383bb66-a0ea-4b49-8f0a-0a2d9e71d425\") " Feb 01 14:57:22 crc kubenswrapper[4820]: I0201 14:57:22.949116 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5383bb66-a0ea-4b49-8f0a-0a2d9e71d425-ceph" (OuterVolumeSpecName: "ceph") pod "5383bb66-a0ea-4b49-8f0a-0a2d9e71d425" (UID: "5383bb66-a0ea-4b49-8f0a-0a2d9e71d425"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:57:22 crc kubenswrapper[4820]: I0201 14:57:22.949146 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5383bb66-a0ea-4b49-8f0a-0a2d9e71d425-kube-api-access-gx6df" (OuterVolumeSpecName: "kube-api-access-gx6df") pod "5383bb66-a0ea-4b49-8f0a-0a2d9e71d425" (UID: "5383bb66-a0ea-4b49-8f0a-0a2d9e71d425"). InnerVolumeSpecName "kube-api-access-gx6df". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:57:22 crc kubenswrapper[4820]: I0201 14:57:22.976282 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5383bb66-a0ea-4b49-8f0a-0a2d9e71d425-inventory" (OuterVolumeSpecName: "inventory") pod "5383bb66-a0ea-4b49-8f0a-0a2d9e71d425" (UID: "5383bb66-a0ea-4b49-8f0a-0a2d9e71d425"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:57:22 crc kubenswrapper[4820]: I0201 14:57:22.987651 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5383bb66-a0ea-4b49-8f0a-0a2d9e71d425-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5383bb66-a0ea-4b49-8f0a-0a2d9e71d425" (UID: "5383bb66-a0ea-4b49-8f0a-0a2d9e71d425"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.043166 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5383bb66-a0ea-4b49-8f0a-0a2d9e71d425-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.043195 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx6df\" (UniqueName: \"kubernetes.io/projected/5383bb66-a0ea-4b49-8f0a-0a2d9e71d425-kube-api-access-gx6df\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.043204 4820 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5383bb66-a0ea-4b49-8f0a-0a2d9e71d425-ceph\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.043213 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5383bb66-a0ea-4b49-8f0a-0a2d9e71d425-inventory\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.400633 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b" event={"ID":"5383bb66-a0ea-4b49-8f0a-0a2d9e71d425","Type":"ContainerDied","Data":"7dbf75960e0c73bce894e53be304213bf4a84c5d45543bb05a898b2be7133830"} Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.401655 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dbf75960e0c73bce894e53be304213bf4a84c5d45543bb05a898b2be7133830" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.400731 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.402834 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzm6p" event={"ID":"393f7c53-888e-4f1f-85e9-ede8c67180a8","Type":"ContainerStarted","Data":"7db8c16d6216008184bc0bd876c524763b9b78968fba4b06291b88e84f189aab"} Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.420491 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bzm6p" podStartSLOduration=2.956547328 podStartE2EDuration="5.420473558s" podCreationTimestamp="2026-02-01 14:57:18 +0000 UTC" firstStartedPulling="2026-02-01 14:57:20.366589262 +0000 UTC m=+2181.886955546" lastFinishedPulling="2026-02-01 14:57:22.830515472 +0000 UTC m=+2184.350881776" observedRunningTime="2026-02-01 14:57:23.419369692 +0000 UTC m=+2184.939735996" watchObservedRunningTime="2026-02-01 14:57:23.420473558 +0000 UTC m=+2184.940839852" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.577062 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4"] Feb 01 14:57:23 crc kubenswrapper[4820]: E0201 14:57:23.577756 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5383bb66-a0ea-4b49-8f0a-0a2d9e71d425" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.577775 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5383bb66-a0ea-4b49-8f0a-0a2d9e71d425" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.577952 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5383bb66-a0ea-4b49-8f0a-0a2d9e71d425" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.578643 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.584179 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.584680 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.584806 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.585333 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.585630 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.585912 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.586198 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.586348 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.589280 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4"] Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.757715 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.757811 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/14aabcc4-ac5c-416f-8887-988e9292625b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.757905 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.757957 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/14aabcc4-ac5c-416f-8887-988e9292625b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.758008 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.758059 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/14aabcc4-ac5c-416f-8887-988e9292625b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.758127 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.758164 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.758203 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.758254 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.758308 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmltn\" (UniqueName: \"kubernetes.io/projected/14aabcc4-ac5c-416f-8887-988e9292625b-kube-api-access-bmltn\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.758366 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.758445 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.859716 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.859772 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/14aabcc4-ac5c-416f-8887-988e9292625b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.859806 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.859836 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/14aabcc4-ac5c-416f-8887-988e9292625b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.859863 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc 
kubenswrapper[4820]: I0201 14:57:23.859902 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/14aabcc4-ac5c-416f-8887-988e9292625b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.859937 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.859952 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.859972 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.859994 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.860021 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmltn\" (UniqueName: \"kubernetes.io/projected/14aabcc4-ac5c-416f-8887-988e9292625b-kube-api-access-bmltn\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.860049 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.860088 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: 
\"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.865600 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.865839 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.866553 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/14aabcc4-ac5c-416f-8887-988e9292625b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.866596 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.866849 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.867009 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.867387 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/14aabcc4-ac5c-416f-8887-988e9292625b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.867528 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.868861 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/14aabcc4-ac5c-416f-8887-988e9292625b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.868971 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.873843 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.878135 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.885382 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmltn\" (UniqueName: \"kubernetes.io/projected/14aabcc4-ac5c-416f-8887-988e9292625b-kube-api-access-bmltn\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:23 crc kubenswrapper[4820]: I0201 14:57:23.894208 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:24 crc kubenswrapper[4820]: I0201 14:57:24.460168 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4"] Feb 01 14:57:24 crc kubenswrapper[4820]: W0201 14:57:24.461666 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14aabcc4_ac5c_416f_8887_988e9292625b.slice/crio-4ba5d99f934364cce4d80b9940bdbeff4ae18911309c35a63767968aa77ba0e3 WatchSource:0}: Error finding container 4ba5d99f934364cce4d80b9940bdbeff4ae18911309c35a63767968aa77ba0e3: Status 404 returned error can't find the container with id 4ba5d99f934364cce4d80b9940bdbeff4ae18911309c35a63767968aa77ba0e3 Feb 01 14:57:25 crc kubenswrapper[4820]: I0201 14:57:25.421021 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" event={"ID":"14aabcc4-ac5c-416f-8887-988e9292625b","Type":"ContainerStarted","Data":"039d8567f542f209f07162a22043393957b50821e450c511fe2189d51922fba4"} Feb 01 14:57:25 crc kubenswrapper[4820]: I0201 14:57:25.421798 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" event={"ID":"14aabcc4-ac5c-416f-8887-988e9292625b","Type":"ContainerStarted","Data":"4ba5d99f934364cce4d80b9940bdbeff4ae18911309c35a63767968aa77ba0e3"} Feb 01 14:57:25 crc kubenswrapper[4820]: I0201 14:57:25.440852 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" podStartSLOduration=1.947078372 podStartE2EDuration="2.440834446s" podCreationTimestamp="2026-02-01 14:57:23 +0000 UTC" firstStartedPulling="2026-02-01 14:57:24.464041634 +0000 UTC m=+2185.984407928" lastFinishedPulling="2026-02-01 14:57:24.957797708 +0000 UTC m=+2186.478164002" observedRunningTime="2026-02-01 14:57:25.437017666 +0000 UTC m=+2186.957383950" watchObservedRunningTime="2026-02-01 14:57:25.440834446 +0000 UTC m=+2186.961200740" Feb 01 14:57:26 crc kubenswrapper[4820]: I0201 14:57:26.527915 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-94snz" Feb 01 14:57:26 crc kubenswrapper[4820]: I0201 14:57:26.527970 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-94snz" Feb 01 14:57:26 crc kubenswrapper[4820]: I0201 14:57:26.606005 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-94snz" Feb 01 14:57:26 crc kubenswrapper[4820]: I0201 14:57:26.720808 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s9zph" Feb 01 14:57:26 crc kubenswrapper[4820]: I0201 14:57:26.720938 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s9zph" Feb 01 14:57:26 crc kubenswrapper[4820]: I0201 14:57:26.817016 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s9zph" Feb 01 14:57:27 crc kubenswrapper[4820]: I0201 14:57:27.493334 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s9zph" Feb 01 14:57:27 crc kubenswrapper[4820]: I0201 
14:57:27.500268 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-94snz" Feb 01 14:57:29 crc kubenswrapper[4820]: I0201 14:57:29.154097 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-94snz"] Feb 01 14:57:29 crc kubenswrapper[4820]: I0201 14:57:29.165463 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bzm6p" Feb 01 14:57:29 crc kubenswrapper[4820]: I0201 14:57:29.165501 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bzm6p" Feb 01 14:57:29 crc kubenswrapper[4820]: I0201 14:57:29.223266 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bzm6p" Feb 01 14:57:29 crc kubenswrapper[4820]: I0201 14:57:29.455497 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-94snz" podUID="93854f85-6619-4538-9010-6ee8c83434e5" containerName="registry-server" containerID="cri-o://b9357ef6e06a74e7405c823059a458df74f161d87cfadd7dd1bf68ac277f72c8" gracePeriod=2 Feb 01 14:57:29 crc kubenswrapper[4820]: I0201 14:57:29.507770 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bzm6p" Feb 01 14:57:29 crc kubenswrapper[4820]: I0201 14:57:29.905757 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-94snz" Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.074583 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93854f85-6619-4538-9010-6ee8c83434e5-utilities\") pod \"93854f85-6619-4538-9010-6ee8c83434e5\" (UID: \"93854f85-6619-4538-9010-6ee8c83434e5\") " Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.074754 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93854f85-6619-4538-9010-6ee8c83434e5-catalog-content\") pod \"93854f85-6619-4538-9010-6ee8c83434e5\" (UID: \"93854f85-6619-4538-9010-6ee8c83434e5\") " Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.075133 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxcvj\" (UniqueName: \"kubernetes.io/projected/93854f85-6619-4538-9010-6ee8c83434e5-kube-api-access-rxcvj\") pod \"93854f85-6619-4538-9010-6ee8c83434e5\" (UID: \"93854f85-6619-4538-9010-6ee8c83434e5\") " Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.075687 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93854f85-6619-4538-9010-6ee8c83434e5-utilities" (OuterVolumeSpecName: "utilities") pod "93854f85-6619-4538-9010-6ee8c83434e5" (UID: "93854f85-6619-4538-9010-6ee8c83434e5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.076019 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93854f85-6619-4538-9010-6ee8c83434e5-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.088781 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93854f85-6619-4538-9010-6ee8c83434e5-kube-api-access-rxcvj" (OuterVolumeSpecName: "kube-api-access-rxcvj") pod "93854f85-6619-4538-9010-6ee8c83434e5" (UID: "93854f85-6619-4538-9010-6ee8c83434e5"). InnerVolumeSpecName "kube-api-access-rxcvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.106527 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93854f85-6619-4538-9010-6ee8c83434e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "93854f85-6619-4538-9010-6ee8c83434e5" (UID: "93854f85-6619-4538-9010-6ee8c83434e5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.158843 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s9zph"] Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.177383 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxcvj\" (UniqueName: \"kubernetes.io/projected/93854f85-6619-4538-9010-6ee8c83434e5-kube-api-access-rxcvj\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.177411 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93854f85-6619-4538-9010-6ee8c83434e5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.466920 4820 generic.go:334] "Generic (PLEG): container finished" podID="93854f85-6619-4538-9010-6ee8c83434e5" containerID="b9357ef6e06a74e7405c823059a458df74f161d87cfadd7dd1bf68ac277f72c8" exitCode=0 Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.467008 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94snz" event={"ID":"93854f85-6619-4538-9010-6ee8c83434e5","Type":"ContainerDied","Data":"b9357ef6e06a74e7405c823059a458df74f161d87cfadd7dd1bf68ac277f72c8"} Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.467083 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-94snz" Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.467113 4820 scope.go:117] "RemoveContainer" containerID="b9357ef6e06a74e7405c823059a458df74f161d87cfadd7dd1bf68ac277f72c8" Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.467092 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94snz" event={"ID":"93854f85-6619-4538-9010-6ee8c83434e5","Type":"ContainerDied","Data":"2933c2f2146d31d283d3e313929c75683e256f8132beefbccd266a402fcaa044"} Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.467328 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s9zph" podUID="e409e5d8-2819-4507-9a09-7a298271e1be" containerName="registry-server" containerID="cri-o://e1ac066ffe3ef9bd56c535a58a7f92c432ba4e76aaf4ead9d9b9348e64b09c67" gracePeriod=2 Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.490145 4820 scope.go:117] "RemoveContainer" containerID="18f58d502b0a2f4dbe5bd28a72add2d2c91adddfc71f1e9a65a592ce0f396041" Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.518134 4820 scope.go:117] "RemoveContainer" containerID="9faa1a315602e48518967ed748d261a940bc43e80ca7fcc1fc1042aa1c40e418" Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.519354 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-94snz"] Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.527776 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-94snz"] Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.698265 4820 scope.go:117] "RemoveContainer" containerID="b9357ef6e06a74e7405c823059a458df74f161d87cfadd7dd1bf68ac277f72c8" Feb 01 14:57:30 crc kubenswrapper[4820]: E0201 14:57:30.699322 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9357ef6e06a74e7405c823059a458df74f161d87cfadd7dd1bf68ac277f72c8\": container with ID starting with b9357ef6e06a74e7405c823059a458df74f161d87cfadd7dd1bf68ac277f72c8 not found: ID does not exist" containerID="b9357ef6e06a74e7405c823059a458df74f161d87cfadd7dd1bf68ac277f72c8" Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.699362 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9357ef6e06a74e7405c823059a458df74f161d87cfadd7dd1bf68ac277f72c8"} err="failed to get container status \"b9357ef6e06a74e7405c823059a458df74f161d87cfadd7dd1bf68ac277f72c8\": rpc error: code = NotFound desc = could not find container \"b9357ef6e06a74e7405c823059a458df74f161d87cfadd7dd1bf68ac277f72c8\": container with ID starting with b9357ef6e06a74e7405c823059a458df74f161d87cfadd7dd1bf68ac277f72c8 not found: ID does not exist" Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.699386 4820 scope.go:117] "RemoveContainer" containerID="18f58d502b0a2f4dbe5bd28a72add2d2c91adddfc71f1e9a65a592ce0f396041" Feb 01 14:57:30 crc kubenswrapper[4820]: E0201 14:57:30.699741 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18f58d502b0a2f4dbe5bd28a72add2d2c91adddfc71f1e9a65a592ce0f396041\": container with ID starting with 18f58d502b0a2f4dbe5bd28a72add2d2c91adddfc71f1e9a65a592ce0f396041 not found: ID does not exist" containerID="18f58d502b0a2f4dbe5bd28a72add2d2c91adddfc71f1e9a65a592ce0f396041" Feb 01 14:57:30 crc 
kubenswrapper[4820]: I0201 14:57:30.699775 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18f58d502b0a2f4dbe5bd28a72add2d2c91adddfc71f1e9a65a592ce0f396041"} err="failed to get container status \"18f58d502b0a2f4dbe5bd28a72add2d2c91adddfc71f1e9a65a592ce0f396041\": rpc error: code = NotFound desc = could not find container \"18f58d502b0a2f4dbe5bd28a72add2d2c91adddfc71f1e9a65a592ce0f396041\": container with ID starting with 18f58d502b0a2f4dbe5bd28a72add2d2c91adddfc71f1e9a65a592ce0f396041 not found: ID does not exist" Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.699798 4820 scope.go:117] "RemoveContainer" containerID="9faa1a315602e48518967ed748d261a940bc43e80ca7fcc1fc1042aa1c40e418" Feb 01 14:57:30 crc kubenswrapper[4820]: E0201 14:57:30.700187 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9faa1a315602e48518967ed748d261a940bc43e80ca7fcc1fc1042aa1c40e418\": container with ID starting with 9faa1a315602e48518967ed748d261a940bc43e80ca7fcc1fc1042aa1c40e418 not found: ID does not exist" containerID="9faa1a315602e48518967ed748d261a940bc43e80ca7fcc1fc1042aa1c40e418" Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.700214 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9faa1a315602e48518967ed748d261a940bc43e80ca7fcc1fc1042aa1c40e418"} err="failed to get container status \"9faa1a315602e48518967ed748d261a940bc43e80ca7fcc1fc1042aa1c40e418\": rpc error: code = NotFound desc = could not find container \"9faa1a315602e48518967ed748d261a940bc43e80ca7fcc1fc1042aa1c40e418\": container with ID starting with 9faa1a315602e48518967ed748d261a940bc43e80ca7fcc1fc1042aa1c40e418 not found: ID does not exist" Feb 01 14:57:30 crc kubenswrapper[4820]: I0201 14:57:30.963000 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s9zph" Feb 01 14:57:31 crc kubenswrapper[4820]: I0201 14:57:31.100192 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e409e5d8-2819-4507-9a09-7a298271e1be-catalog-content\") pod \"e409e5d8-2819-4507-9a09-7a298271e1be\" (UID: \"e409e5d8-2819-4507-9a09-7a298271e1be\") " Feb 01 14:57:31 crc kubenswrapper[4820]: I0201 14:57:31.100397 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25gmj\" (UniqueName: \"kubernetes.io/projected/e409e5d8-2819-4507-9a09-7a298271e1be-kube-api-access-25gmj\") pod \"e409e5d8-2819-4507-9a09-7a298271e1be\" (UID: \"e409e5d8-2819-4507-9a09-7a298271e1be\") " Feb 01 14:57:31 crc kubenswrapper[4820]: I0201 14:57:31.100420 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e409e5d8-2819-4507-9a09-7a298271e1be-utilities\") pod \"e409e5d8-2819-4507-9a09-7a298271e1be\" (UID: \"e409e5d8-2819-4507-9a09-7a298271e1be\") " Feb 01 14:57:31 crc kubenswrapper[4820]: I0201 14:57:31.101560 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e409e5d8-2819-4507-9a09-7a298271e1be-utilities" (OuterVolumeSpecName: "utilities") pod "e409e5d8-2819-4507-9a09-7a298271e1be" (UID: "e409e5d8-2819-4507-9a09-7a298271e1be"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:57:31 crc kubenswrapper[4820]: I0201 14:57:31.111635 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e409e5d8-2819-4507-9a09-7a298271e1be-kube-api-access-25gmj" (OuterVolumeSpecName: "kube-api-access-25gmj") pod "e409e5d8-2819-4507-9a09-7a298271e1be" (UID: "e409e5d8-2819-4507-9a09-7a298271e1be"). InnerVolumeSpecName "kube-api-access-25gmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:57:31 crc kubenswrapper[4820]: I0201 14:57:31.169757 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e409e5d8-2819-4507-9a09-7a298271e1be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e409e5d8-2819-4507-9a09-7a298271e1be" (UID: "e409e5d8-2819-4507-9a09-7a298271e1be"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:57:31 crc kubenswrapper[4820]: I0201 14:57:31.202020 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e409e5d8-2819-4507-9a09-7a298271e1be-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:31 crc kubenswrapper[4820]: I0201 14:57:31.202069 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25gmj\" (UniqueName: \"kubernetes.io/projected/e409e5d8-2819-4507-9a09-7a298271e1be-kube-api-access-25gmj\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:31 crc kubenswrapper[4820]: I0201 14:57:31.202089 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e409e5d8-2819-4507-9a09-7a298271e1be-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:31 crc kubenswrapper[4820]: I0201 14:57:31.211931 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93854f85-6619-4538-9010-6ee8c83434e5" path="/var/lib/kubelet/pods/93854f85-6619-4538-9010-6ee8c83434e5/volumes" Feb 01 14:57:31 crc kubenswrapper[4820]: I0201 14:57:31.480575 4820 generic.go:334] "Generic (PLEG): container finished" podID="e409e5d8-2819-4507-9a09-7a298271e1be" containerID="e1ac066ffe3ef9bd56c535a58a7f92c432ba4e76aaf4ead9d9b9348e64b09c67" exitCode=0 Feb 01 14:57:31 crc kubenswrapper[4820]: I0201 14:57:31.481049 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9zph" event={"ID":"e409e5d8-2819-4507-9a09-7a298271e1be","Type":"ContainerDied","Data":"e1ac066ffe3ef9bd56c535a58a7f92c432ba4e76aaf4ead9d9b9348e64b09c67"} Feb 01 14:57:31 crc kubenswrapper[4820]: I0201 14:57:31.481083 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9zph" event={"ID":"e409e5d8-2819-4507-9a09-7a298271e1be","Type":"ContainerDied","Data":"29cf192a1a7e0c109b39366ba11b5fbc5a6e51fbf1c2ecd92460fa99f70d4b0d"} Feb 01 14:57:31 crc kubenswrapper[4820]: I0201 14:57:31.481107 4820 scope.go:117] "RemoveContainer" containerID="e1ac066ffe3ef9bd56c535a58a7f92c432ba4e76aaf4ead9d9b9348e64b09c67" Feb 01 14:57:31 crc kubenswrapper[4820]: I0201 14:57:31.481156 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s9zph" Feb 01 14:57:31 crc kubenswrapper[4820]: I0201 14:57:31.504790 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s9zph"] Feb 01 14:57:31 crc kubenswrapper[4820]: I0201 14:57:31.517250 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s9zph"] Feb 01 14:57:31 crc kubenswrapper[4820]: I0201 14:57:31.527817 4820 scope.go:117] "RemoveContainer" containerID="1a2f65312ca056bf55899bff39267bf1a1ee843df283d124f5bc1ab3f5beef4f" Feb 01 14:57:31 crc kubenswrapper[4820]: I0201 14:57:31.559362 4820 scope.go:117] "RemoveContainer" containerID="410fab44103d9c0205605b6968e701676f8102828cc491deeef2b297b17cd658" Feb 01 14:57:31 crc kubenswrapper[4820]: I0201 14:57:31.588510 4820 scope.go:117] "RemoveContainer" containerID="e1ac066ffe3ef9bd56c535a58a7f92c432ba4e76aaf4ead9d9b9348e64b09c67" Feb 01 14:57:31 crc kubenswrapper[4820]: E0201 14:57:31.589370 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1ac066ffe3ef9bd56c535a58a7f92c432ba4e76aaf4ead9d9b9348e64b09c67\": container with ID starting with e1ac066ffe3ef9bd56c535a58a7f92c432ba4e76aaf4ead9d9b9348e64b09c67 not found: ID does not exist" containerID="e1ac066ffe3ef9bd56c535a58a7f92c432ba4e76aaf4ead9d9b9348e64b09c67" Feb 01 14:57:31 crc kubenswrapper[4820]: I0201 14:57:31.589437 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1ac066ffe3ef9bd56c535a58a7f92c432ba4e76aaf4ead9d9b9348e64b09c67"} err="failed to get container status \"e1ac066ffe3ef9bd56c535a58a7f92c432ba4e76aaf4ead9d9b9348e64b09c67\": rpc error: code = NotFound desc = could not find container \"e1ac066ffe3ef9bd56c535a58a7f92c432ba4e76aaf4ead9d9b9348e64b09c67\": container with ID starting with e1ac066ffe3ef9bd56c535a58a7f92c432ba4e76aaf4ead9d9b9348e64b09c67 not found: ID does not exist" Feb 01 14:57:31 crc kubenswrapper[4820]: I0201 14:57:31.589489 4820 scope.go:117] "RemoveContainer" containerID="1a2f65312ca056bf55899bff39267bf1a1ee843df283d124f5bc1ab3f5beef4f" Feb 01 14:57:31 crc kubenswrapper[4820]: E0201 14:57:31.590042 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a2f65312ca056bf55899bff39267bf1a1ee843df283d124f5bc1ab3f5beef4f\": container with ID starting with 1a2f65312ca056bf55899bff39267bf1a1ee843df283d124f5bc1ab3f5beef4f not found: ID does not exist" containerID="1a2f65312ca056bf55899bff39267bf1a1ee843df283d124f5bc1ab3f5beef4f" Feb 01 14:57:31 crc kubenswrapper[4820]: I0201 14:57:31.590096 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a2f65312ca056bf55899bff39267bf1a1ee843df283d124f5bc1ab3f5beef4f"} err="failed to get container status \"1a2f65312ca056bf55899bff39267bf1a1ee843df283d124f5bc1ab3f5beef4f\": rpc error: code = NotFound desc = could not find container \"1a2f65312ca056bf55899bff39267bf1a1ee843df283d124f5bc1ab3f5beef4f\": container with ID starting with 1a2f65312ca056bf55899bff39267bf1a1ee843df283d124f5bc1ab3f5beef4f not found: ID does not exist" Feb 01 14:57:31 crc kubenswrapper[4820]: I0201 14:57:31.590128 4820 scope.go:117] "RemoveContainer" containerID="410fab44103d9c0205605b6968e701676f8102828cc491deeef2b297b17cd658" Feb 01 14:57:31 crc kubenswrapper[4820]: E0201 14:57:31.590544 4820 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"410fab44103d9c0205605b6968e701676f8102828cc491deeef2b297b17cd658\": container with ID starting with 410fab44103d9c0205605b6968e701676f8102828cc491deeef2b297b17cd658 not found: ID does not exist" containerID="410fab44103d9c0205605b6968e701676f8102828cc491deeef2b297b17cd658" Feb 01 14:57:31 crc kubenswrapper[4820]: I0201 14:57:31.590587 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"410fab44103d9c0205605b6968e701676f8102828cc491deeef2b297b17cd658"} err="failed to get container status \"410fab44103d9c0205605b6968e701676f8102828cc491deeef2b297b17cd658\": rpc error: code = NotFound desc = could not find container \"410fab44103d9c0205605b6968e701676f8102828cc491deeef2b297b17cd658\": container with ID starting with 410fab44103d9c0205605b6968e701676f8102828cc491deeef2b297b17cd658 not found: ID does not exist" Feb 01 14:57:32 crc kubenswrapper[4820]: I0201 14:57:32.554712 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bzm6p"] Feb 01 14:57:32 crc kubenswrapper[4820]: I0201 14:57:32.555823 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bzm6p" podUID="393f7c53-888e-4f1f-85e9-ede8c67180a8" containerName="registry-server" containerID="cri-o://7db8c16d6216008184bc0bd876c524763b9b78968fba4b06291b88e84f189aab" gracePeriod=2 Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.007132 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bzm6p" Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.134645 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/393f7c53-888e-4f1f-85e9-ede8c67180a8-utilities\") pod \"393f7c53-888e-4f1f-85e9-ede8c67180a8\" (UID: \"393f7c53-888e-4f1f-85e9-ede8c67180a8\") " Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.134803 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5k8h\" (UniqueName: \"kubernetes.io/projected/393f7c53-888e-4f1f-85e9-ede8c67180a8-kube-api-access-k5k8h\") pod \"393f7c53-888e-4f1f-85e9-ede8c67180a8\" (UID: \"393f7c53-888e-4f1f-85e9-ede8c67180a8\") " Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.134903 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/393f7c53-888e-4f1f-85e9-ede8c67180a8-catalog-content\") pod \"393f7c53-888e-4f1f-85e9-ede8c67180a8\" (UID: \"393f7c53-888e-4f1f-85e9-ede8c67180a8\") " Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.135217 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/393f7c53-888e-4f1f-85e9-ede8c67180a8-utilities" (OuterVolumeSpecName: "utilities") pod "393f7c53-888e-4f1f-85e9-ede8c67180a8" (UID: "393f7c53-888e-4f1f-85e9-ede8c67180a8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.135450 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/393f7c53-888e-4f1f-85e9-ede8c67180a8-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.140914 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/393f7c53-888e-4f1f-85e9-ede8c67180a8-kube-api-access-k5k8h" (OuterVolumeSpecName: "kube-api-access-k5k8h") pod "393f7c53-888e-4f1f-85e9-ede8c67180a8" (UID: "393f7c53-888e-4f1f-85e9-ede8c67180a8"). InnerVolumeSpecName "kube-api-access-k5k8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.191828 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/393f7c53-888e-4f1f-85e9-ede8c67180a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "393f7c53-888e-4f1f-85e9-ede8c67180a8" (UID: "393f7c53-888e-4f1f-85e9-ede8c67180a8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.208329 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e409e5d8-2819-4507-9a09-7a298271e1be" path="/var/lib/kubelet/pods/e409e5d8-2819-4507-9a09-7a298271e1be/volumes" Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.236843 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5k8h\" (UniqueName: \"kubernetes.io/projected/393f7c53-888e-4f1f-85e9-ede8c67180a8-kube-api-access-k5k8h\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.236886 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/393f7c53-888e-4f1f-85e9-ede8c67180a8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.510527 4820 generic.go:334] "Generic (PLEG): container finished" podID="393f7c53-888e-4f1f-85e9-ede8c67180a8" containerID="7db8c16d6216008184bc0bd876c524763b9b78968fba4b06291b88e84f189aab" exitCode=0 Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.510594 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bzm6p" Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.510614 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzm6p" event={"ID":"393f7c53-888e-4f1f-85e9-ede8c67180a8","Type":"ContainerDied","Data":"7db8c16d6216008184bc0bd876c524763b9b78968fba4b06291b88e84f189aab"} Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.510941 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzm6p" event={"ID":"393f7c53-888e-4f1f-85e9-ede8c67180a8","Type":"ContainerDied","Data":"23a22f20a468059c7aa6c4a806fc684f534d5772c53cc629397764841ffce867"} Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.510964 4820 scope.go:117] "RemoveContainer" containerID="7db8c16d6216008184bc0bd876c524763b9b78968fba4b06291b88e84f189aab" Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.539784 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bzm6p"] Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.540576 4820 scope.go:117] "RemoveContainer" containerID="1951e743db6d6b652c10c2d5d3ac0503e004456df5dbaa3386c18dc1472ae949" Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.549039 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bzm6p"] Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.568325 4820 scope.go:117] "RemoveContainer" containerID="f9760d292f7fedb931374fdb3a939977c1974d043f8fb8674b91c441af5d73c1" Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.606208 4820 scope.go:117] "RemoveContainer" containerID="7db8c16d6216008184bc0bd876c524763b9b78968fba4b06291b88e84f189aab" Feb 01 14:57:33 crc kubenswrapper[4820]: E0201 14:57:33.606682 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7db8c16d6216008184bc0bd876c524763b9b78968fba4b06291b88e84f189aab\": container with ID starting with 7db8c16d6216008184bc0bd876c524763b9b78968fba4b06291b88e84f189aab not found: ID does not exist" containerID="7db8c16d6216008184bc0bd876c524763b9b78968fba4b06291b88e84f189aab" Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.606722 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7db8c16d6216008184bc0bd876c524763b9b78968fba4b06291b88e84f189aab"} err="failed to get container status \"7db8c16d6216008184bc0bd876c524763b9b78968fba4b06291b88e84f189aab\": rpc error: code = NotFound desc = could not find container \"7db8c16d6216008184bc0bd876c524763b9b78968fba4b06291b88e84f189aab\": container with ID starting with 7db8c16d6216008184bc0bd876c524763b9b78968fba4b06291b88e84f189aab not found: ID does not exist" Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.606749 4820 scope.go:117] "RemoveContainer" containerID="1951e743db6d6b652c10c2d5d3ac0503e004456df5dbaa3386c18dc1472ae949" Feb 01 14:57:33 crc kubenswrapper[4820]: E0201 14:57:33.607113 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1951e743db6d6b652c10c2d5d3ac0503e004456df5dbaa3386c18dc1472ae949\": container with ID starting with 1951e743db6d6b652c10c2d5d3ac0503e004456df5dbaa3386c18dc1472ae949 not found: ID does not exist" containerID="1951e743db6d6b652c10c2d5d3ac0503e004456df5dbaa3386c18dc1472ae949" Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.607156 4820 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1951e743db6d6b652c10c2d5d3ac0503e004456df5dbaa3386c18dc1472ae949"} err="failed to get container status \"1951e743db6d6b652c10c2d5d3ac0503e004456df5dbaa3386c18dc1472ae949\": rpc error: code = NotFound desc = could not find container \"1951e743db6d6b652c10c2d5d3ac0503e004456df5dbaa3386c18dc1472ae949\": container with ID starting with 1951e743db6d6b652c10c2d5d3ac0503e004456df5dbaa3386c18dc1472ae949 not found: ID does not exist" Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.607182 4820 scope.go:117] "RemoveContainer" containerID="f9760d292f7fedb931374fdb3a939977c1974d043f8fb8674b91c441af5d73c1" Feb 01 14:57:33 crc kubenswrapper[4820]: E0201 14:57:33.607430 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9760d292f7fedb931374fdb3a939977c1974d043f8fb8674b91c441af5d73c1\": container with ID starting with f9760d292f7fedb931374fdb3a939977c1974d043f8fb8674b91c441af5d73c1 not found: ID does not exist" containerID="f9760d292f7fedb931374fdb3a939977c1974d043f8fb8674b91c441af5d73c1" Feb 01 14:57:33 crc kubenswrapper[4820]: I0201 14:57:33.607457 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9760d292f7fedb931374fdb3a939977c1974d043f8fb8674b91c441af5d73c1"} err="failed to get container status \"f9760d292f7fedb931374fdb3a939977c1974d043f8fb8674b91c441af5d73c1\": rpc error: code = NotFound desc = could not find container \"f9760d292f7fedb931374fdb3a939977c1974d043f8fb8674b91c441af5d73c1\": container with ID starting with f9760d292f7fedb931374fdb3a939977c1974d043f8fb8674b91c441af5d73c1 not found: ID does not exist" Feb 01 14:57:35 crc kubenswrapper[4820]: I0201 14:57:35.207506 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="393f7c53-888e-4f1f-85e9-ede8c67180a8" path="/var/lib/kubelet/pods/393f7c53-888e-4f1f-85e9-ede8c67180a8/volumes" Feb 01 14:57:49 crc kubenswrapper[4820]: I0201 14:57:49.242899 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:57:49 crc kubenswrapper[4820]: I0201 14:57:49.243510 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:57:54 crc kubenswrapper[4820]: I0201 14:57:54.726790 4820 generic.go:334] "Generic (PLEG): container finished" podID="14aabcc4-ac5c-416f-8887-988e9292625b" containerID="039d8567f542f209f07162a22043393957b50821e450c511fe2189d51922fba4" exitCode=0 Feb 01 14:57:54 crc kubenswrapper[4820]: I0201 14:57:54.727265 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" event={"ID":"14aabcc4-ac5c-416f-8887-988e9292625b","Type":"ContainerDied","Data":"039d8567f542f209f07162a22043393957b50821e450c511fe2189d51922fba4"} Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.132688 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.209434 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-ovn-combined-ca-bundle\") pod \"14aabcc4-ac5c-416f-8887-988e9292625b\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.209553 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-ssh-key-openstack-edpm-ipam\") pod \"14aabcc4-ac5c-416f-8887-988e9292625b\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.209668 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/14aabcc4-ac5c-416f-8887-988e9292625b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"14aabcc4-ac5c-416f-8887-988e9292625b\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.209729 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/14aabcc4-ac5c-416f-8887-988e9292625b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"14aabcc4-ac5c-416f-8887-988e9292625b\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.209794 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-bootstrap-combined-ca-bundle\") pod \"14aabcc4-ac5c-416f-8887-988e9292625b\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.209835 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmltn\" (UniqueName: \"kubernetes.io/projected/14aabcc4-ac5c-416f-8887-988e9292625b-kube-api-access-bmltn\") pod \"14aabcc4-ac5c-416f-8887-988e9292625b\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.209869 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-libvirt-combined-ca-bundle\") pod \"14aabcc4-ac5c-416f-8887-988e9292625b\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.209941 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-inventory\") pod \"14aabcc4-ac5c-416f-8887-988e9292625b\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.210025 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-repo-setup-combined-ca-bundle\") pod \"14aabcc4-ac5c-416f-8887-988e9292625b\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " Feb 01 14:57:56 crc 
kubenswrapper[4820]: I0201 14:57:56.210154 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-nova-combined-ca-bundle\") pod \"14aabcc4-ac5c-416f-8887-988e9292625b\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.210201 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/14aabcc4-ac5c-416f-8887-988e9292625b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"14aabcc4-ac5c-416f-8887-988e9292625b\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.210264 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-neutron-metadata-combined-ca-bundle\") pod \"14aabcc4-ac5c-416f-8887-988e9292625b\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.210343 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-ceph\") pod \"14aabcc4-ac5c-416f-8887-988e9292625b\" (UID: \"14aabcc4-ac5c-416f-8887-988e9292625b\") " Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.216600 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "14aabcc4-ac5c-416f-8887-988e9292625b" (UID: "14aabcc4-ac5c-416f-8887-988e9292625b"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.216898 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "14aabcc4-ac5c-416f-8887-988e9292625b" (UID: "14aabcc4-ac5c-416f-8887-988e9292625b"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.217298 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14aabcc4-ac5c-416f-8887-988e9292625b-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "14aabcc4-ac5c-416f-8887-988e9292625b" (UID: "14aabcc4-ac5c-416f-8887-988e9292625b"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.217308 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "14aabcc4-ac5c-416f-8887-988e9292625b" (UID: "14aabcc4-ac5c-416f-8887-988e9292625b"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.218197 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "14aabcc4-ac5c-416f-8887-988e9292625b" (UID: "14aabcc4-ac5c-416f-8887-988e9292625b"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.219354 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14aabcc4-ac5c-416f-8887-988e9292625b-kube-api-access-bmltn" (OuterVolumeSpecName: "kube-api-access-bmltn") pod "14aabcc4-ac5c-416f-8887-988e9292625b" (UID: "14aabcc4-ac5c-416f-8887-988e9292625b"). InnerVolumeSpecName "kube-api-access-bmltn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.219775 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "14aabcc4-ac5c-416f-8887-988e9292625b" (UID: "14aabcc4-ac5c-416f-8887-988e9292625b"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.219864 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14aabcc4-ac5c-416f-8887-988e9292625b-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "14aabcc4-ac5c-416f-8887-988e9292625b" (UID: "14aabcc4-ac5c-416f-8887-988e9292625b"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.219910 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "14aabcc4-ac5c-416f-8887-988e9292625b" (UID: "14aabcc4-ac5c-416f-8887-988e9292625b"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.222012 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14aabcc4-ac5c-416f-8887-988e9292625b-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "14aabcc4-ac5c-416f-8887-988e9292625b" (UID: "14aabcc4-ac5c-416f-8887-988e9292625b"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.222715 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-ceph" (OuterVolumeSpecName: "ceph") pod "14aabcc4-ac5c-416f-8887-988e9292625b" (UID: "14aabcc4-ac5c-416f-8887-988e9292625b"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.240311 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-inventory" (OuterVolumeSpecName: "inventory") pod "14aabcc4-ac5c-416f-8887-988e9292625b" (UID: "14aabcc4-ac5c-416f-8887-988e9292625b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.246675 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "14aabcc4-ac5c-416f-8887-988e9292625b" (UID: "14aabcc4-ac5c-416f-8887-988e9292625b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.321398 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/14aabcc4-ac5c-416f-8887-988e9292625b-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.321451 4820 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.321464 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmltn\" (UniqueName: \"kubernetes.io/projected/14aabcc4-ac5c-416f-8887-988e9292625b-kube-api-access-bmltn\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.321476 4820 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.321490 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-inventory\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.321501 4820 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.321539 4820 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.321563 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/14aabcc4-ac5c-416f-8887-988e9292625b-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.321575 4820 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.321588 4820 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-ceph\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.321600 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.321613 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14aabcc4-ac5c-416f-8887-988e9292625b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.321628 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/14aabcc4-ac5c-416f-8887-988e9292625b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.745355 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" event={"ID":"14aabcc4-ac5c-416f-8887-988e9292625b","Type":"ContainerDied","Data":"4ba5d99f934364cce4d80b9940bdbeff4ae18911309c35a63767968aa77ba0e3"} Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.745406 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ba5d99f934364cce4d80b9940bdbeff4ae18911309c35a63767968aa77ba0e3" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.745404 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.888202 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4"] Feb 01 14:57:56 crc kubenswrapper[4820]: E0201 14:57:56.888981 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e409e5d8-2819-4507-9a09-7a298271e1be" containerName="extract-content" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.889004 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e409e5d8-2819-4507-9a09-7a298271e1be" containerName="extract-content" Feb 01 14:57:56 crc kubenswrapper[4820]: E0201 14:57:56.889029 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="393f7c53-888e-4f1f-85e9-ede8c67180a8" containerName="extract-content" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.889039 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="393f7c53-888e-4f1f-85e9-ede8c67180a8" containerName="extract-content" Feb 01 14:57:56 crc kubenswrapper[4820]: E0201 14:57:56.889050 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e409e5d8-2819-4507-9a09-7a298271e1be" containerName="registry-server" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.889058 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e409e5d8-2819-4507-9a09-7a298271e1be" containerName="registry-server" Feb 01 14:57:56 crc kubenswrapper[4820]: E0201 14:57:56.889072 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93854f85-6619-4538-9010-6ee8c83434e5" containerName="extract-content" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.889081 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="93854f85-6619-4538-9010-6ee8c83434e5" containerName="extract-content" Feb 01 14:57:56 crc kubenswrapper[4820]: E0201 14:57:56.889097 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93854f85-6619-4538-9010-6ee8c83434e5" containerName="extract-utilities" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.889107 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="93854f85-6619-4538-9010-6ee8c83434e5" containerName="extract-utilities" Feb 01 14:57:56 crc kubenswrapper[4820]: E0201 14:57:56.889129 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="393f7c53-888e-4f1f-85e9-ede8c67180a8" containerName="registry-server" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.889137 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="393f7c53-888e-4f1f-85e9-ede8c67180a8" containerName="registry-server" Feb 01 14:57:56 crc kubenswrapper[4820]: E0201 14:57:56.889159 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e409e5d8-2819-4507-9a09-7a298271e1be" containerName="extract-utilities" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.889167 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e409e5d8-2819-4507-9a09-7a298271e1be" containerName="extract-utilities" Feb 01 14:57:56 crc kubenswrapper[4820]: E0201 14:57:56.889181 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14aabcc4-ac5c-416f-8887-988e9292625b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.889191 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="14aabcc4-ac5c-416f-8887-988e9292625b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 01 14:57:56 crc 
kubenswrapper[4820]: E0201 14:57:56.889203 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="393f7c53-888e-4f1f-85e9-ede8c67180a8" containerName="extract-utilities" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.889212 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="393f7c53-888e-4f1f-85e9-ede8c67180a8" containerName="extract-utilities" Feb 01 14:57:56 crc kubenswrapper[4820]: E0201 14:57:56.889225 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93854f85-6619-4538-9010-6ee8c83434e5" containerName="registry-server" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.889233 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="93854f85-6619-4538-9010-6ee8c83434e5" containerName="registry-server" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.889440 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="93854f85-6619-4538-9010-6ee8c83434e5" containerName="registry-server" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.889457 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e409e5d8-2819-4507-9a09-7a298271e1be" containerName="registry-server" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.889470 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="14aabcc4-ac5c-416f-8887-988e9292625b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.889489 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="393f7c53-888e-4f1f-85e9-ede8c67180a8" containerName="registry-server" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.890232 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.896114 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.896227 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.896397 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.896549 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.896838 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 01 14:57:56 crc kubenswrapper[4820]: I0201 14:57:56.906462 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4"] Feb 01 14:57:57 crc kubenswrapper[4820]: I0201 14:57:57.034561 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1d2dd40-e56b-4ca1-9a71-17fbb175f20f-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4\" (UID: \"f1d2dd40-e56b-4ca1-9a71-17fbb175f20f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4" Feb 01 14:57:57 crc kubenswrapper[4820]: I0201 14:57:57.034889 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb9vw\" (UniqueName: 
\"kubernetes.io/projected/f1d2dd40-e56b-4ca1-9a71-17fbb175f20f-kube-api-access-fb9vw\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4\" (UID: \"f1d2dd40-e56b-4ca1-9a71-17fbb175f20f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4" Feb 01 14:57:57 crc kubenswrapper[4820]: I0201 14:57:57.035069 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1d2dd40-e56b-4ca1-9a71-17fbb175f20f-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4\" (UID: \"f1d2dd40-e56b-4ca1-9a71-17fbb175f20f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4" Feb 01 14:57:57 crc kubenswrapper[4820]: I0201 14:57:57.035300 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f1d2dd40-e56b-4ca1-9a71-17fbb175f20f-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4\" (UID: \"f1d2dd40-e56b-4ca1-9a71-17fbb175f20f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4" Feb 01 14:57:57 crc kubenswrapper[4820]: I0201 14:57:57.137500 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f1d2dd40-e56b-4ca1-9a71-17fbb175f20f-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4\" (UID: \"f1d2dd40-e56b-4ca1-9a71-17fbb175f20f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4" Feb 01 14:57:57 crc kubenswrapper[4820]: I0201 14:57:57.137559 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1d2dd40-e56b-4ca1-9a71-17fbb175f20f-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4\" (UID: \"f1d2dd40-e56b-4ca1-9a71-17fbb175f20f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4" Feb 01 14:57:57 crc kubenswrapper[4820]: I0201 14:57:57.137647 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb9vw\" (UniqueName: \"kubernetes.io/projected/f1d2dd40-e56b-4ca1-9a71-17fbb175f20f-kube-api-access-fb9vw\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4\" (UID: \"f1d2dd40-e56b-4ca1-9a71-17fbb175f20f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4" Feb 01 14:57:57 crc kubenswrapper[4820]: I0201 14:57:57.137739 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1d2dd40-e56b-4ca1-9a71-17fbb175f20f-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4\" (UID: \"f1d2dd40-e56b-4ca1-9a71-17fbb175f20f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4" Feb 01 14:57:57 crc kubenswrapper[4820]: I0201 14:57:57.143768 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1d2dd40-e56b-4ca1-9a71-17fbb175f20f-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4\" (UID: \"f1d2dd40-e56b-4ca1-9a71-17fbb175f20f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4" Feb 01 14:57:57 crc kubenswrapper[4820]: I0201 14:57:57.146162 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/f1d2dd40-e56b-4ca1-9a71-17fbb175f20f-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4\" (UID: \"f1d2dd40-e56b-4ca1-9a71-17fbb175f20f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4" Feb 01 14:57:57 crc kubenswrapper[4820]: I0201 14:57:57.147356 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f1d2dd40-e56b-4ca1-9a71-17fbb175f20f-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4\" (UID: \"f1d2dd40-e56b-4ca1-9a71-17fbb175f20f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4" Feb 01 14:57:57 crc kubenswrapper[4820]: I0201 14:57:57.165467 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb9vw\" (UniqueName: \"kubernetes.io/projected/f1d2dd40-e56b-4ca1-9a71-17fbb175f20f-kube-api-access-fb9vw\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4\" (UID: \"f1d2dd40-e56b-4ca1-9a71-17fbb175f20f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4" Feb 01 14:57:57 crc kubenswrapper[4820]: I0201 14:57:57.224347 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4" Feb 01 14:57:57 crc kubenswrapper[4820]: I0201 14:57:57.753139 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4"] Feb 01 14:57:57 crc kubenswrapper[4820]: I0201 14:57:57.762533 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 01 14:57:58 crc kubenswrapper[4820]: I0201 14:57:58.765141 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4" event={"ID":"f1d2dd40-e56b-4ca1-9a71-17fbb175f20f","Type":"ContainerStarted","Data":"66bffcabcf9fafe81f86bb3c28f5f2dc77da4514c7527572239fa014b7be706e"} Feb 01 14:57:58 crc kubenswrapper[4820]: I0201 14:57:58.765196 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4" event={"ID":"f1d2dd40-e56b-4ca1-9a71-17fbb175f20f","Type":"ContainerStarted","Data":"e20f21afd1952327988fe3230a1e4258d244b9ad61ecb99efe2600f216505238"} Feb 01 14:57:58 crc kubenswrapper[4820]: I0201 14:57:58.789496 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4" podStartSLOduration=2.362498582 podStartE2EDuration="2.789476631s" podCreationTimestamp="2026-02-01 14:57:56 +0000 UTC" firstStartedPulling="2026-02-01 14:57:57.762213882 +0000 UTC m=+2219.282580166" lastFinishedPulling="2026-02-01 14:57:58.189191921 +0000 UTC m=+2219.709558215" observedRunningTime="2026-02-01 14:57:58.780221381 +0000 UTC m=+2220.300587695" watchObservedRunningTime="2026-02-01 14:57:58.789476631 +0000 UTC m=+2220.309842925" Feb 01 14:58:03 crc kubenswrapper[4820]: I0201 14:58:03.808185 4820 generic.go:334] "Generic (PLEG): container finished" podID="f1d2dd40-e56b-4ca1-9a71-17fbb175f20f" containerID="66bffcabcf9fafe81f86bb3c28f5f2dc77da4514c7527572239fa014b7be706e" exitCode=0 Feb 01 14:58:03 crc kubenswrapper[4820]: I0201 14:58:03.808337 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4" 
event={"ID":"f1d2dd40-e56b-4ca1-9a71-17fbb175f20f","Type":"ContainerDied","Data":"66bffcabcf9fafe81f86bb3c28f5f2dc77da4514c7527572239fa014b7be706e"} Feb 01 14:58:05 crc kubenswrapper[4820]: I0201 14:58:05.207484 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4" Feb 01 14:58:05 crc kubenswrapper[4820]: I0201 14:58:05.298173 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb9vw\" (UniqueName: \"kubernetes.io/projected/f1d2dd40-e56b-4ca1-9a71-17fbb175f20f-kube-api-access-fb9vw\") pod \"f1d2dd40-e56b-4ca1-9a71-17fbb175f20f\" (UID: \"f1d2dd40-e56b-4ca1-9a71-17fbb175f20f\") " Feb 01 14:58:05 crc kubenswrapper[4820]: I0201 14:58:05.298541 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f1d2dd40-e56b-4ca1-9a71-17fbb175f20f-ceph\") pod \"f1d2dd40-e56b-4ca1-9a71-17fbb175f20f\" (UID: \"f1d2dd40-e56b-4ca1-9a71-17fbb175f20f\") " Feb 01 14:58:05 crc kubenswrapper[4820]: I0201 14:58:05.298631 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1d2dd40-e56b-4ca1-9a71-17fbb175f20f-inventory\") pod \"f1d2dd40-e56b-4ca1-9a71-17fbb175f20f\" (UID: \"f1d2dd40-e56b-4ca1-9a71-17fbb175f20f\") " Feb 01 14:58:05 crc kubenswrapper[4820]: I0201 14:58:05.298710 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1d2dd40-e56b-4ca1-9a71-17fbb175f20f-ssh-key-openstack-edpm-ipam\") pod \"f1d2dd40-e56b-4ca1-9a71-17fbb175f20f\" (UID: \"f1d2dd40-e56b-4ca1-9a71-17fbb175f20f\") " Feb 01 14:58:05 crc kubenswrapper[4820]: I0201 14:58:05.306028 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1d2dd40-e56b-4ca1-9a71-17fbb175f20f-kube-api-access-fb9vw" (OuterVolumeSpecName: "kube-api-access-fb9vw") pod "f1d2dd40-e56b-4ca1-9a71-17fbb175f20f" (UID: "f1d2dd40-e56b-4ca1-9a71-17fbb175f20f"). InnerVolumeSpecName "kube-api-access-fb9vw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:58:05 crc kubenswrapper[4820]: I0201 14:58:05.312088 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d2dd40-e56b-4ca1-9a71-17fbb175f20f-ceph" (OuterVolumeSpecName: "ceph") pod "f1d2dd40-e56b-4ca1-9a71-17fbb175f20f" (UID: "f1d2dd40-e56b-4ca1-9a71-17fbb175f20f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:58:05 crc kubenswrapper[4820]: I0201 14:58:05.323691 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d2dd40-e56b-4ca1-9a71-17fbb175f20f-inventory" (OuterVolumeSpecName: "inventory") pod "f1d2dd40-e56b-4ca1-9a71-17fbb175f20f" (UID: "f1d2dd40-e56b-4ca1-9a71-17fbb175f20f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:58:05 crc kubenswrapper[4820]: I0201 14:58:05.340112 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d2dd40-e56b-4ca1-9a71-17fbb175f20f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f1d2dd40-e56b-4ca1-9a71-17fbb175f20f" (UID: "f1d2dd40-e56b-4ca1-9a71-17fbb175f20f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:58:05 crc kubenswrapper[4820]: I0201 14:58:05.400363 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb9vw\" (UniqueName: \"kubernetes.io/projected/f1d2dd40-e56b-4ca1-9a71-17fbb175f20f-kube-api-access-fb9vw\") on node \"crc\" DevicePath \"\"" Feb 01 14:58:05 crc kubenswrapper[4820]: I0201 14:58:05.400409 4820 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f1d2dd40-e56b-4ca1-9a71-17fbb175f20f-ceph\") on node \"crc\" DevicePath \"\"" Feb 01 14:58:05 crc kubenswrapper[4820]: I0201 14:58:05.400420 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1d2dd40-e56b-4ca1-9a71-17fbb175f20f-inventory\") on node \"crc\" DevicePath \"\"" Feb 01 14:58:05 crc kubenswrapper[4820]: I0201 14:58:05.400432 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1d2dd40-e56b-4ca1-9a71-17fbb175f20f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 14:58:05 crc kubenswrapper[4820]: I0201 14:58:05.830462 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4" event={"ID":"f1d2dd40-e56b-4ca1-9a71-17fbb175f20f","Type":"ContainerDied","Data":"e20f21afd1952327988fe3230a1e4258d244b9ad61ecb99efe2600f216505238"} Feb 01 14:58:05 crc kubenswrapper[4820]: I0201 14:58:05.830510 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e20f21afd1952327988fe3230a1e4258d244b9ad61ecb99efe2600f216505238" Feb 01 14:58:05 crc kubenswrapper[4820]: I0201 14:58:05.830588 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.004556 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp"] Feb 01 14:58:06 crc kubenswrapper[4820]: E0201 14:58:06.004909 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1d2dd40-e56b-4ca1-9a71-17fbb175f20f" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.004926 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1d2dd40-e56b-4ca1-9a71-17fbb175f20f" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.005080 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1d2dd40-e56b-4ca1-9a71-17fbb175f20f" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.005627 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.007592 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.007827 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.007946 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.008238 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.008791 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.009016 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.026106 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp"] Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.112974 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-k58zp\" (UID: \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.113039 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-k58zp\" (UID: \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.113063 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-k58zp\" (UID: \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.113125 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-k58zp\" (UID: \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.113167 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff8p6\" (UniqueName: \"kubernetes.io/projected/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-kube-api-access-ff8p6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-k58zp\" (UID: \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.113192 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-k58zp\" (UID: \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.214538 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff8p6\" (UniqueName: \"kubernetes.io/projected/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-kube-api-access-ff8p6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-k58zp\" (UID: \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.214582 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-k58zp\" (UID: \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.214628 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-k58zp\" (UID: \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.214669 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-k58zp\" (UID: \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.214693 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-k58zp\" (UID: \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.214751 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-k58zp\" (UID: \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.216320 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-k58zp\" (UID: \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.217958 
4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-k58zp\" (UID: \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.219149 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-k58zp\" (UID: \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.222702 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-k58zp\" (UID: \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.224639 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-k58zp\" (UID: \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.235830 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff8p6\" (UniqueName: \"kubernetes.io/projected/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-kube-api-access-ff8p6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-k58zp\" (UID: \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.323344 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" Feb 01 14:58:06 crc kubenswrapper[4820]: I0201 14:58:06.837921 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp"] Feb 01 14:58:07 crc kubenswrapper[4820]: I0201 14:58:07.845882 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" event={"ID":"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34","Type":"ContainerStarted","Data":"0485fe239223e846e7080fe9323d1ea31e6713928ca21ee3bfb36ba88c2325dc"} Feb 01 14:58:07 crc kubenswrapper[4820]: I0201 14:58:07.846375 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" event={"ID":"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34","Type":"ContainerStarted","Data":"fb29bafabd4ad7b1eeb4a3e1838ccc163c6f900676677382bd0a2abd0568912d"} Feb 01 14:58:07 crc kubenswrapper[4820]: I0201 14:58:07.867807 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" podStartSLOduration=2.399484963 podStartE2EDuration="2.867789022s" podCreationTimestamp="2026-02-01 14:58:05 +0000 UTC" firstStartedPulling="2026-02-01 14:58:06.840123873 +0000 UTC m=+2228.360490157" lastFinishedPulling="2026-02-01 14:58:07.308427892 +0000 UTC m=+2228.828794216" observedRunningTime="2026-02-01 14:58:07.861685227 +0000 UTC m=+2229.382051551" watchObservedRunningTime="2026-02-01 14:58:07.867789022 +0000 UTC m=+2229.388155296" Feb 01 14:58:19 crc kubenswrapper[4820]: I0201 14:58:19.242507 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 14:58:19 crc kubenswrapper[4820]: I0201 14:58:19.244049 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 14:58:19 crc kubenswrapper[4820]: I0201 14:58:19.244143 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 14:58:19 crc kubenswrapper[4820]: I0201 14:58:19.245377 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23"} pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 14:58:19 crc kubenswrapper[4820]: I0201 14:58:19.245459 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" containerID="cri-o://6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" gracePeriod=600 Feb 01 14:58:19 crc kubenswrapper[4820]: E0201 14:58:19.370074 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:58:19 crc kubenswrapper[4820]: I0201 14:58:19.951391 4820 generic.go:334] "Generic (PLEG): container finished" podID="060a9e0b-803f-4ccc-bed6-92614d449527" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" exitCode=0 Feb 01 14:58:19 crc kubenswrapper[4820]: I0201 14:58:19.951453 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerDied","Data":"6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23"} Feb 01 14:58:19 crc kubenswrapper[4820]: I0201 14:58:19.951543 4820 scope.go:117] "RemoveContainer" containerID="e1ac13f57d76a51287898e8d454a95536e8fab51db09445ace007faf6813a62c" Feb 01 14:58:19 crc kubenswrapper[4820]: I0201 14:58:19.952844 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 14:58:19 crc kubenswrapper[4820]: E0201 14:58:19.953479 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:58:31 crc kubenswrapper[4820]: I0201 14:58:31.198919 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 14:58:31 crc kubenswrapper[4820]: E0201 14:58:31.199726 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:58:42 crc kubenswrapper[4820]: I0201 14:58:42.199063 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 14:58:42 crc kubenswrapper[4820]: E0201 14:58:42.200015 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:58:56 crc kubenswrapper[4820]: I0201 14:58:56.206190 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 14:58:56 crc kubenswrapper[4820]: E0201 14:58:56.207133 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:59:10 crc kubenswrapper[4820]: I0201 14:59:10.198907 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 14:59:10 crc kubenswrapper[4820]: E0201 14:59:10.199793 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:59:11 crc kubenswrapper[4820]: I0201 14:59:11.431995 4820 generic.go:334] "Generic (PLEG): container finished" podID="0bcf6e2a-cd11-4f34-bb8e-20002b75fb34" containerID="0485fe239223e846e7080fe9323d1ea31e6713928ca21ee3bfb36ba88c2325dc" exitCode=0 Feb 01 14:59:11 crc kubenswrapper[4820]: I0201 14:59:11.432106 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" event={"ID":"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34","Type":"ContainerDied","Data":"0485fe239223e846e7080fe9323d1ea31e6713928ca21ee3bfb36ba88c2325dc"} Feb 01 14:59:12 crc kubenswrapper[4820]: I0201 14:59:12.908938 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" Feb 01 14:59:12 crc kubenswrapper[4820]: I0201 14:59:12.972423 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-ceph\") pod \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\" (UID: \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\") " Feb 01 14:59:12 crc kubenswrapper[4820]: I0201 14:59:12.972525 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-ovn-combined-ca-bundle\") pod \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\" (UID: \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\") " Feb 01 14:59:12 crc kubenswrapper[4820]: I0201 14:59:12.972613 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-ovncontroller-config-0\") pod \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\" (UID: \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\") " Feb 01 14:59:12 crc kubenswrapper[4820]: I0201 14:59:12.972649 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-ssh-key-openstack-edpm-ipam\") pod \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\" (UID: \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\") " Feb 01 14:59:12 crc kubenswrapper[4820]: I0201 14:59:12.972850 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff8p6\" (UniqueName: \"kubernetes.io/projected/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-kube-api-access-ff8p6\") pod \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\" (UID: \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\") " Feb 01 14:59:12 crc kubenswrapper[4820]: I0201 
14:59:12.973026 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-inventory\") pod \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\" (UID: \"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34\") " Feb 01 14:59:12 crc kubenswrapper[4820]: I0201 14:59:12.981494 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-ceph" (OuterVolumeSpecName: "ceph") pod "0bcf6e2a-cd11-4f34-bb8e-20002b75fb34" (UID: "0bcf6e2a-cd11-4f34-bb8e-20002b75fb34"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:59:12 crc kubenswrapper[4820]: I0201 14:59:12.982312 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-kube-api-access-ff8p6" (OuterVolumeSpecName: "kube-api-access-ff8p6") pod "0bcf6e2a-cd11-4f34-bb8e-20002b75fb34" (UID: "0bcf6e2a-cd11-4f34-bb8e-20002b75fb34"). InnerVolumeSpecName "kube-api-access-ff8p6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 14:59:12 crc kubenswrapper[4820]: I0201 14:59:12.982624 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "0bcf6e2a-cd11-4f34-bb8e-20002b75fb34" (UID: "0bcf6e2a-cd11-4f34-bb8e-20002b75fb34"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.002714 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-inventory" (OuterVolumeSpecName: "inventory") pod "0bcf6e2a-cd11-4f34-bb8e-20002b75fb34" (UID: "0bcf6e2a-cd11-4f34-bb8e-20002b75fb34"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.010539 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "0bcf6e2a-cd11-4f34-bb8e-20002b75fb34" (UID: "0bcf6e2a-cd11-4f34-bb8e-20002b75fb34"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.013236 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0bcf6e2a-cd11-4f34-bb8e-20002b75fb34" (UID: "0bcf6e2a-cd11-4f34-bb8e-20002b75fb34"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.075975 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-inventory\") on node \"crc\" DevicePath \"\"" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.076025 4820 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-ceph\") on node \"crc\" DevicePath \"\"" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.076045 4820 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.076063 4820 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.076081 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.076100 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff8p6\" (UniqueName: \"kubernetes.io/projected/0bcf6e2a-cd11-4f34-bb8e-20002b75fb34-kube-api-access-ff8p6\") on node \"crc\" DevicePath \"\"" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.461484 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" event={"ID":"0bcf6e2a-cd11-4f34-bb8e-20002b75fb34","Type":"ContainerDied","Data":"fb29bafabd4ad7b1eeb4a3e1838ccc163c6f900676677382bd0a2abd0568912d"} Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.461538 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-k58zp" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.461545 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb29bafabd4ad7b1eeb4a3e1838ccc163c6f900676677382bd0a2abd0568912d" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.578566 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb"] Feb 01 14:59:13 crc kubenswrapper[4820]: E0201 14:59:13.579211 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bcf6e2a-cd11-4f34-bb8e-20002b75fb34" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.579248 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bcf6e2a-cd11-4f34-bb8e-20002b75fb34" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.584050 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bcf6e2a-cd11-4f34-bb8e-20002b75fb34" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.585047 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.598303 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb"] Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.599142 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.599578 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.599815 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.600107 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.600452 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.601474 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.601740 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.691076 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xht5\" (UniqueName: \"kubernetes.io/projected/588885cf-0582-447f-8eca-9580725ecc0e-kube-api-access-2xht5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.691285 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.691374 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.691441 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" Feb 01 14:59:13 crc 
kubenswrapper[4820]: I0201 14:59:13.691531 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.691577 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.691888 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.793953 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xht5\" (UniqueName: \"kubernetes.io/projected/588885cf-0582-447f-8eca-9580725ecc0e-kube-api-access-2xht5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.794047 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.794090 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.794109 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.794152 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.794174 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.794237 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.798968 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.799126 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.799167 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.799590 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.806595 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.810474 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.812197 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xht5\" (UniqueName: \"kubernetes.io/projected/588885cf-0582-447f-8eca-9580725ecc0e-kube-api-access-2xht5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" Feb 01 14:59:13 crc kubenswrapper[4820]: I0201 14:59:13.950934 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" Feb 01 14:59:14 crc kubenswrapper[4820]: I0201 14:59:14.284705 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb"] Feb 01 14:59:14 crc kubenswrapper[4820]: I0201 14:59:14.482560 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" event={"ID":"588885cf-0582-447f-8eca-9580725ecc0e","Type":"ContainerStarted","Data":"7fc0f821b3a4d30d9bce22827ca58ce6e4c8344cc4add9d9eee28eaaec1f916b"} Feb 01 14:59:15 crc kubenswrapper[4820]: I0201 14:59:15.492420 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" event={"ID":"588885cf-0582-447f-8eca-9580725ecc0e","Type":"ContainerStarted","Data":"397d68ad4fd8cfc6138943541cceaf0b60cc2f1b86e84a1c58d94d155157adf7"} Feb 01 14:59:15 crc kubenswrapper[4820]: I0201 14:59:15.516814 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" podStartSLOduration=2.037275683 podStartE2EDuration="2.516797928s" podCreationTimestamp="2026-02-01 14:59:13 +0000 UTC" firstStartedPulling="2026-02-01 14:59:14.28762644 +0000 UTC m=+2295.807992724" lastFinishedPulling="2026-02-01 14:59:14.767148675 +0000 UTC m=+2296.287514969" observedRunningTime="2026-02-01 14:59:15.516252076 +0000 UTC m=+2297.036618400" watchObservedRunningTime="2026-02-01 14:59:15.516797928 +0000 UTC m=+2297.037164222" Feb 01 14:59:23 crc kubenswrapper[4820]: I0201 14:59:23.199343 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 14:59:23 crc kubenswrapper[4820]: E0201 14:59:23.200339 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:59:36 crc kubenswrapper[4820]: I0201 14:59:36.199366 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 14:59:36 crc kubenswrapper[4820]: E0201 14:59:36.200341 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 14:59:50 crc kubenswrapper[4820]: I0201 14:59:50.199564 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 14:59:50 crc kubenswrapper[4820]: E0201 14:59:50.200820 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:00:00 crc kubenswrapper[4820]: I0201 15:00:00.165249 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499300-k22nz"] Feb 01 15:00:00 crc kubenswrapper[4820]: I0201 15:00:00.167167 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499300-k22nz" Feb 01 15:00:00 crc kubenswrapper[4820]: I0201 15:00:00.169811 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 01 15:00:00 crc kubenswrapper[4820]: I0201 15:00:00.171422 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 01 15:00:00 crc kubenswrapper[4820]: I0201 15:00:00.177518 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499300-k22nz"] Feb 01 15:00:00 crc kubenswrapper[4820]: I0201 15:00:00.286920 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvjl8\" (UniqueName: \"kubernetes.io/projected/e78a724a-9c65-492a-9c1e-27fb13800cd4-kube-api-access-kvjl8\") pod \"collect-profiles-29499300-k22nz\" (UID: \"e78a724a-9c65-492a-9c1e-27fb13800cd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499300-k22nz" Feb 01 15:00:00 crc kubenswrapper[4820]: I0201 15:00:00.286960 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e78a724a-9c65-492a-9c1e-27fb13800cd4-config-volume\") pod \"collect-profiles-29499300-k22nz\" (UID: \"e78a724a-9c65-492a-9c1e-27fb13800cd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499300-k22nz" Feb 01 15:00:00 crc kubenswrapper[4820]: I0201 15:00:00.287067 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e78a724a-9c65-492a-9c1e-27fb13800cd4-secret-volume\") pod \"collect-profiles-29499300-k22nz\" (UID: \"e78a724a-9c65-492a-9c1e-27fb13800cd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499300-k22nz" Feb 01 15:00:00 crc kubenswrapper[4820]: I0201 15:00:00.388554 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvjl8\" (UniqueName: 
\"kubernetes.io/projected/e78a724a-9c65-492a-9c1e-27fb13800cd4-kube-api-access-kvjl8\") pod \"collect-profiles-29499300-k22nz\" (UID: \"e78a724a-9c65-492a-9c1e-27fb13800cd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499300-k22nz" Feb 01 15:00:00 crc kubenswrapper[4820]: I0201 15:00:00.388607 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e78a724a-9c65-492a-9c1e-27fb13800cd4-config-volume\") pod \"collect-profiles-29499300-k22nz\" (UID: \"e78a724a-9c65-492a-9c1e-27fb13800cd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499300-k22nz" Feb 01 15:00:00 crc kubenswrapper[4820]: I0201 15:00:00.388734 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e78a724a-9c65-492a-9c1e-27fb13800cd4-secret-volume\") pod \"collect-profiles-29499300-k22nz\" (UID: \"e78a724a-9c65-492a-9c1e-27fb13800cd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499300-k22nz" Feb 01 15:00:00 crc kubenswrapper[4820]: I0201 15:00:00.390314 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e78a724a-9c65-492a-9c1e-27fb13800cd4-config-volume\") pod \"collect-profiles-29499300-k22nz\" (UID: \"e78a724a-9c65-492a-9c1e-27fb13800cd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499300-k22nz" Feb 01 15:00:00 crc kubenswrapper[4820]: I0201 15:00:00.401419 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e78a724a-9c65-492a-9c1e-27fb13800cd4-secret-volume\") pod \"collect-profiles-29499300-k22nz\" (UID: \"e78a724a-9c65-492a-9c1e-27fb13800cd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499300-k22nz" Feb 01 15:00:00 crc kubenswrapper[4820]: I0201 15:00:00.405540 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvjl8\" (UniqueName: \"kubernetes.io/projected/e78a724a-9c65-492a-9c1e-27fb13800cd4-kube-api-access-kvjl8\") pod \"collect-profiles-29499300-k22nz\" (UID: \"e78a724a-9c65-492a-9c1e-27fb13800cd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499300-k22nz" Feb 01 15:00:00 crc kubenswrapper[4820]: I0201 15:00:00.483235 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499300-k22nz" Feb 01 15:00:00 crc kubenswrapper[4820]: I0201 15:00:00.724963 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499300-k22nz"] Feb 01 15:00:00 crc kubenswrapper[4820]: I0201 15:00:00.964062 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499300-k22nz" event={"ID":"e78a724a-9c65-492a-9c1e-27fb13800cd4","Type":"ContainerStarted","Data":"23fbd27d0899d4f23b8e9d4628307659f6e762fdb93e544ceb31eecb942eee1e"} Feb 01 15:00:00 crc kubenswrapper[4820]: I0201 15:00:00.964116 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499300-k22nz" event={"ID":"e78a724a-9c65-492a-9c1e-27fb13800cd4","Type":"ContainerStarted","Data":"b3d7cfb8488472254d6ba8384c262a638f436e39e2a4279a9996baa8c56075ca"} Feb 01 15:00:00 crc kubenswrapper[4820]: I0201 15:00:00.989916 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29499300-k22nz" podStartSLOduration=0.989851775 podStartE2EDuration="989.851775ms" podCreationTimestamp="2026-02-01 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 15:00:00.981987229 +0000 UTC m=+2342.502353533" watchObservedRunningTime="2026-02-01 15:00:00.989851775 +0000 UTC m=+2342.510218059" Feb 01 15:00:01 crc kubenswrapper[4820]: I0201 15:00:01.978704 4820 generic.go:334] "Generic (PLEG): container finished" podID="e78a724a-9c65-492a-9c1e-27fb13800cd4" containerID="23fbd27d0899d4f23b8e9d4628307659f6e762fdb93e544ceb31eecb942eee1e" exitCode=0 Feb 01 15:00:01 crc kubenswrapper[4820]: I0201 15:00:01.978782 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499300-k22nz" event={"ID":"e78a724a-9c65-492a-9c1e-27fb13800cd4","Type":"ContainerDied","Data":"23fbd27d0899d4f23b8e9d4628307659f6e762fdb93e544ceb31eecb942eee1e"} Feb 01 15:00:02 crc kubenswrapper[4820]: I0201 15:00:02.198732 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 15:00:02 crc kubenswrapper[4820]: E0201 15:00:02.199160 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:00:03 crc kubenswrapper[4820]: I0201 15:00:03.412809 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499300-k22nz" Feb 01 15:00:03 crc kubenswrapper[4820]: I0201 15:00:03.552991 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e78a724a-9c65-492a-9c1e-27fb13800cd4-secret-volume\") pod \"e78a724a-9c65-492a-9c1e-27fb13800cd4\" (UID: \"e78a724a-9c65-492a-9c1e-27fb13800cd4\") " Feb 01 15:00:03 crc kubenswrapper[4820]: I0201 15:00:03.553112 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvjl8\" (UniqueName: \"kubernetes.io/projected/e78a724a-9c65-492a-9c1e-27fb13800cd4-kube-api-access-kvjl8\") pod \"e78a724a-9c65-492a-9c1e-27fb13800cd4\" (UID: \"e78a724a-9c65-492a-9c1e-27fb13800cd4\") " Feb 01 15:00:03 crc kubenswrapper[4820]: I0201 15:00:03.553181 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e78a724a-9c65-492a-9c1e-27fb13800cd4-config-volume\") pod \"e78a724a-9c65-492a-9c1e-27fb13800cd4\" (UID: \"e78a724a-9c65-492a-9c1e-27fb13800cd4\") " Feb 01 15:00:03 crc kubenswrapper[4820]: I0201 15:00:03.554380 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e78a724a-9c65-492a-9c1e-27fb13800cd4-config-volume" (OuterVolumeSpecName: "config-volume") pod "e78a724a-9c65-492a-9c1e-27fb13800cd4" (UID: "e78a724a-9c65-492a-9c1e-27fb13800cd4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 15:00:03 crc kubenswrapper[4820]: I0201 15:00:03.559507 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e78a724a-9c65-492a-9c1e-27fb13800cd4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e78a724a-9c65-492a-9c1e-27fb13800cd4" (UID: "e78a724a-9c65-492a-9c1e-27fb13800cd4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:00:03 crc kubenswrapper[4820]: I0201 15:00:03.561096 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e78a724a-9c65-492a-9c1e-27fb13800cd4-kube-api-access-kvjl8" (OuterVolumeSpecName: "kube-api-access-kvjl8") pod "e78a724a-9c65-492a-9c1e-27fb13800cd4" (UID: "e78a724a-9c65-492a-9c1e-27fb13800cd4"). InnerVolumeSpecName "kube-api-access-kvjl8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:00:03 crc kubenswrapper[4820]: I0201 15:00:03.655973 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e78a724a-9c65-492a-9c1e-27fb13800cd4-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 01 15:00:03 crc kubenswrapper[4820]: I0201 15:00:03.656029 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvjl8\" (UniqueName: \"kubernetes.io/projected/e78a724a-9c65-492a-9c1e-27fb13800cd4-kube-api-access-kvjl8\") on node \"crc\" DevicePath \"\"" Feb 01 15:00:03 crc kubenswrapper[4820]: I0201 15:00:03.656048 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e78a724a-9c65-492a-9c1e-27fb13800cd4-config-volume\") on node \"crc\" DevicePath \"\"" Feb 01 15:00:04 crc kubenswrapper[4820]: I0201 15:00:04.013304 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499300-k22nz" event={"ID":"e78a724a-9c65-492a-9c1e-27fb13800cd4","Type":"ContainerDied","Data":"b3d7cfb8488472254d6ba8384c262a638f436e39e2a4279a9996baa8c56075ca"} Feb 01 15:00:04 crc kubenswrapper[4820]: I0201 15:00:04.013371 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3d7cfb8488472254d6ba8384c262a638f436e39e2a4279a9996baa8c56075ca" Feb 01 15:00:04 crc kubenswrapper[4820]: I0201 15:00:04.013393 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499300-k22nz" Feb 01 15:00:04 crc kubenswrapper[4820]: I0201 15:00:04.533304 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499255-6plmj"] Feb 01 15:00:04 crc kubenswrapper[4820]: I0201 15:00:04.561575 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499255-6plmj"] Feb 01 15:00:05 crc kubenswrapper[4820]: I0201 15:00:05.211246 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6740627-6bd7-48f8-9dd8-ceccce34fc7f" path="/var/lib/kubelet/pods/a6740627-6bd7-48f8-9dd8-ceccce34fc7f/volumes" Feb 01 15:00:08 crc kubenswrapper[4820]: I0201 15:00:08.115324 4820 scope.go:117] "RemoveContainer" containerID="e5ffadf679e3748ef24b06383df54de342b0be63fba86a3927aebe26fd368d18" Feb 01 15:00:10 crc kubenswrapper[4820]: I0201 15:00:10.080383 4820 generic.go:334] "Generic (PLEG): container finished" podID="588885cf-0582-447f-8eca-9580725ecc0e" containerID="397d68ad4fd8cfc6138943541cceaf0b60cc2f1b86e84a1c58d94d155157adf7" exitCode=0 Feb 01 15:00:10 crc kubenswrapper[4820]: I0201 15:00:10.080481 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" event={"ID":"588885cf-0582-447f-8eca-9580725ecc0e","Type":"ContainerDied","Data":"397d68ad4fd8cfc6138943541cceaf0b60cc2f1b86e84a1c58d94d155157adf7"} Feb 01 15:00:11 crc kubenswrapper[4820]: I0201 15:00:11.567057 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" Feb 01 15:00:11 crc kubenswrapper[4820]: I0201 15:00:11.729840 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-ceph\") pod \"588885cf-0582-447f-8eca-9580725ecc0e\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " Feb 01 15:00:11 crc kubenswrapper[4820]: I0201 15:00:11.729965 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-nova-metadata-neutron-config-0\") pod \"588885cf-0582-447f-8eca-9580725ecc0e\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " Feb 01 15:00:11 crc kubenswrapper[4820]: I0201 15:00:11.730024 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-ssh-key-openstack-edpm-ipam\") pod \"588885cf-0582-447f-8eca-9580725ecc0e\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " Feb 01 15:00:11 crc kubenswrapper[4820]: I0201 15:00:11.730080 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xht5\" (UniqueName: \"kubernetes.io/projected/588885cf-0582-447f-8eca-9580725ecc0e-kube-api-access-2xht5\") pod \"588885cf-0582-447f-8eca-9580725ecc0e\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " Feb 01 15:00:11 crc kubenswrapper[4820]: I0201 15:00:11.730198 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-inventory\") pod \"588885cf-0582-447f-8eca-9580725ecc0e\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " Feb 01 15:00:11 crc kubenswrapper[4820]: I0201 15:00:11.730868 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"588885cf-0582-447f-8eca-9580725ecc0e\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " Feb 01 15:00:11 crc kubenswrapper[4820]: I0201 15:00:11.730915 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-neutron-metadata-combined-ca-bundle\") pod \"588885cf-0582-447f-8eca-9580725ecc0e\" (UID: \"588885cf-0582-447f-8eca-9580725ecc0e\") " Feb 01 15:00:11 crc kubenswrapper[4820]: I0201 15:00:11.737727 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-ceph" (OuterVolumeSpecName: "ceph") pod "588885cf-0582-447f-8eca-9580725ecc0e" (UID: "588885cf-0582-447f-8eca-9580725ecc0e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:00:11 crc kubenswrapper[4820]: I0201 15:00:11.738177 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "588885cf-0582-447f-8eca-9580725ecc0e" (UID: "588885cf-0582-447f-8eca-9580725ecc0e"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:00:11 crc kubenswrapper[4820]: I0201 15:00:11.738591 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/588885cf-0582-447f-8eca-9580725ecc0e-kube-api-access-2xht5" (OuterVolumeSpecName: "kube-api-access-2xht5") pod "588885cf-0582-447f-8eca-9580725ecc0e" (UID: "588885cf-0582-447f-8eca-9580725ecc0e"). InnerVolumeSpecName "kube-api-access-2xht5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:00:11 crc kubenswrapper[4820]: I0201 15:00:11.767339 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-inventory" (OuterVolumeSpecName: "inventory") pod "588885cf-0582-447f-8eca-9580725ecc0e" (UID: "588885cf-0582-447f-8eca-9580725ecc0e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:00:11 crc kubenswrapper[4820]: I0201 15:00:11.771690 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "588885cf-0582-447f-8eca-9580725ecc0e" (UID: "588885cf-0582-447f-8eca-9580725ecc0e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:00:11 crc kubenswrapper[4820]: I0201 15:00:11.789187 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "588885cf-0582-447f-8eca-9580725ecc0e" (UID: "588885cf-0582-447f-8eca-9580725ecc0e"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:00:11 crc kubenswrapper[4820]: I0201 15:00:11.789295 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "588885cf-0582-447f-8eca-9580725ecc0e" (UID: "588885cf-0582-447f-8eca-9580725ecc0e"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:00:11 crc kubenswrapper[4820]: I0201 15:00:11.832952 4820 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-ceph\") on node \"crc\" DevicePath \"\"" Feb 01 15:00:11 crc kubenswrapper[4820]: I0201 15:00:11.832991 4820 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 01 15:00:11 crc kubenswrapper[4820]: I0201 15:00:11.833010 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 15:00:11 crc kubenswrapper[4820]: I0201 15:00:11.833025 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xht5\" (UniqueName: \"kubernetes.io/projected/588885cf-0582-447f-8eca-9580725ecc0e-kube-api-access-2xht5\") on node \"crc\" DevicePath \"\"" Feb 01 15:00:11 crc kubenswrapper[4820]: I0201 15:00:11.833037 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-inventory\") on node \"crc\" DevicePath \"\"" Feb 01 15:00:11 crc kubenswrapper[4820]: I0201 15:00:11.833052 4820 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 01 15:00:11 crc kubenswrapper[4820]: I0201 15:00:11.833079 4820 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/588885cf-0582-447f-8eca-9580725ecc0e-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.105391 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" event={"ID":"588885cf-0582-447f-8eca-9580725ecc0e","Type":"ContainerDied","Data":"7fc0f821b3a4d30d9bce22827ca58ce6e4c8344cc4add9d9eee28eaaec1f916b"} Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.105463 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fc0f821b3a4d30d9bce22827ca58ce6e4c8344cc4add9d9eee28eaaec1f916b" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.105550 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.241329 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg"] Feb 01 15:00:12 crc kubenswrapper[4820]: E0201 15:00:12.241686 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="588885cf-0582-447f-8eca-9580725ecc0e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.241704 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="588885cf-0582-447f-8eca-9580725ecc0e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 01 15:00:12 crc kubenswrapper[4820]: E0201 15:00:12.241718 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e78a724a-9c65-492a-9c1e-27fb13800cd4" containerName="collect-profiles" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.241724 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78a724a-9c65-492a-9c1e-27fb13800cd4" containerName="collect-profiles" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.241981 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e78a724a-9c65-492a-9c1e-27fb13800cd4" containerName="collect-profiles" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.242007 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="588885cf-0582-447f-8eca-9580725ecc0e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.242580 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.247495 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.247555 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.247981 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.248164 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.248355 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.248515 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.254780 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg"] Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.343038 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg\" (UID: \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.343453 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg\" (UID: \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.343506 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg\" (UID: \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.343691 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7jjg\" (UniqueName: \"kubernetes.io/projected/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-kube-api-access-z7jjg\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg\" (UID: \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.343718 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg\" (UID: \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.343734 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg\" (UID: \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.445562 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg\" (UID: \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.445616 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg\" (UID: \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.445654 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg\" (UID: \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" Feb 01 
15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.445760 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7jjg\" (UniqueName: \"kubernetes.io/projected/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-kube-api-access-z7jjg\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg\" (UID: \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.445788 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg\" (UID: \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.445813 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg\" (UID: \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.452923 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg\" (UID: \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.453671 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg\" (UID: \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.454941 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg\" (UID: \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.454960 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg\" (UID: \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.455629 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg\" (UID: \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.472052 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7jjg\" 
(UniqueName: \"kubernetes.io/projected/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-kube-api-access-z7jjg\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg\" (UID: \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" Feb 01 15:00:12 crc kubenswrapper[4820]: I0201 15:00:12.557201 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" Feb 01 15:00:13 crc kubenswrapper[4820]: I0201 15:00:13.131496 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg"] Feb 01 15:00:14 crc kubenswrapper[4820]: I0201 15:00:14.131489 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" event={"ID":"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128","Type":"ContainerStarted","Data":"11887ecc704068f36f766a24a6ba2b22996248141d9120c1391f2ae5acca9bd6"} Feb 01 15:00:15 crc kubenswrapper[4820]: I0201 15:00:15.142181 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" event={"ID":"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128","Type":"ContainerStarted","Data":"308e0beb76f9f328f976ddf1f7c3cf3c75d348c92ffa8879bae1b163cb6a1508"} Feb 01 15:00:15 crc kubenswrapper[4820]: I0201 15:00:15.183592 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" podStartSLOduration=2.466146066 podStartE2EDuration="3.183562716s" podCreationTimestamp="2026-02-01 15:00:12 +0000 UTC" firstStartedPulling="2026-02-01 15:00:13.150294642 +0000 UTC m=+2354.670660926" lastFinishedPulling="2026-02-01 15:00:13.867711282 +0000 UTC m=+2355.388077576" observedRunningTime="2026-02-01 15:00:15.176482788 +0000 UTC m=+2356.696849102" watchObservedRunningTime="2026-02-01 15:00:15.183562716 +0000 UTC m=+2356.703929040" Feb 01 15:00:16 crc kubenswrapper[4820]: I0201 15:00:16.198594 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 15:00:16 crc kubenswrapper[4820]: E0201 15:00:16.198914 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:00:27 crc kubenswrapper[4820]: I0201 15:00:27.199271 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 15:00:27 crc kubenswrapper[4820]: E0201 15:00:27.200321 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:00:41 crc kubenswrapper[4820]: I0201 15:00:41.199130 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 15:00:41 crc kubenswrapper[4820]: E0201 
15:00:41.200103 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:00:55 crc kubenswrapper[4820]: I0201 15:00:55.201106 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 15:00:55 crc kubenswrapper[4820]: E0201 15:00:55.202721 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:01:00 crc kubenswrapper[4820]: I0201 15:01:00.187921 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29499301-92jj6"] Feb 01 15:01:00 crc kubenswrapper[4820]: I0201 15:01:00.191292 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29499301-92jj6" Feb 01 15:01:00 crc kubenswrapper[4820]: I0201 15:01:00.201963 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29499301-92jj6"] Feb 01 15:01:00 crc kubenswrapper[4820]: I0201 15:01:00.328129 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acc829d8-a18c-48cc-8b6d-4516a64c1de9-config-data\") pod \"keystone-cron-29499301-92jj6\" (UID: \"acc829d8-a18c-48cc-8b6d-4516a64c1de9\") " pod="openstack/keystone-cron-29499301-92jj6" Feb 01 15:01:00 crc kubenswrapper[4820]: I0201 15:01:00.328164 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/acc829d8-a18c-48cc-8b6d-4516a64c1de9-fernet-keys\") pod \"keystone-cron-29499301-92jj6\" (UID: \"acc829d8-a18c-48cc-8b6d-4516a64c1de9\") " pod="openstack/keystone-cron-29499301-92jj6" Feb 01 15:01:00 crc kubenswrapper[4820]: I0201 15:01:00.328278 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgdgd\" (UniqueName: \"kubernetes.io/projected/acc829d8-a18c-48cc-8b6d-4516a64c1de9-kube-api-access-zgdgd\") pod \"keystone-cron-29499301-92jj6\" (UID: \"acc829d8-a18c-48cc-8b6d-4516a64c1de9\") " pod="openstack/keystone-cron-29499301-92jj6" Feb 01 15:01:00 crc kubenswrapper[4820]: I0201 15:01:00.328319 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc829d8-a18c-48cc-8b6d-4516a64c1de9-combined-ca-bundle\") pod \"keystone-cron-29499301-92jj6\" (UID: \"acc829d8-a18c-48cc-8b6d-4516a64c1de9\") " pod="openstack/keystone-cron-29499301-92jj6" Feb 01 15:01:00 crc kubenswrapper[4820]: I0201 15:01:00.431100 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgdgd\" (UniqueName: \"kubernetes.io/projected/acc829d8-a18c-48cc-8b6d-4516a64c1de9-kube-api-access-zgdgd\") pod 
\"keystone-cron-29499301-92jj6\" (UID: \"acc829d8-a18c-48cc-8b6d-4516a64c1de9\") " pod="openstack/keystone-cron-29499301-92jj6" Feb 01 15:01:00 crc kubenswrapper[4820]: I0201 15:01:00.431216 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc829d8-a18c-48cc-8b6d-4516a64c1de9-combined-ca-bundle\") pod \"keystone-cron-29499301-92jj6\" (UID: \"acc829d8-a18c-48cc-8b6d-4516a64c1de9\") " pod="openstack/keystone-cron-29499301-92jj6" Feb 01 15:01:00 crc kubenswrapper[4820]: I0201 15:01:00.431381 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acc829d8-a18c-48cc-8b6d-4516a64c1de9-config-data\") pod \"keystone-cron-29499301-92jj6\" (UID: \"acc829d8-a18c-48cc-8b6d-4516a64c1de9\") " pod="openstack/keystone-cron-29499301-92jj6" Feb 01 15:01:00 crc kubenswrapper[4820]: I0201 15:01:00.431419 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/acc829d8-a18c-48cc-8b6d-4516a64c1de9-fernet-keys\") pod \"keystone-cron-29499301-92jj6\" (UID: \"acc829d8-a18c-48cc-8b6d-4516a64c1de9\") " pod="openstack/keystone-cron-29499301-92jj6" Feb 01 15:01:00 crc kubenswrapper[4820]: I0201 15:01:00.440906 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acc829d8-a18c-48cc-8b6d-4516a64c1de9-config-data\") pod \"keystone-cron-29499301-92jj6\" (UID: \"acc829d8-a18c-48cc-8b6d-4516a64c1de9\") " pod="openstack/keystone-cron-29499301-92jj6" Feb 01 15:01:00 crc kubenswrapper[4820]: I0201 15:01:00.443179 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc829d8-a18c-48cc-8b6d-4516a64c1de9-combined-ca-bundle\") pod \"keystone-cron-29499301-92jj6\" (UID: \"acc829d8-a18c-48cc-8b6d-4516a64c1de9\") " pod="openstack/keystone-cron-29499301-92jj6" Feb 01 15:01:00 crc kubenswrapper[4820]: I0201 15:01:00.449801 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/acc829d8-a18c-48cc-8b6d-4516a64c1de9-fernet-keys\") pod \"keystone-cron-29499301-92jj6\" (UID: \"acc829d8-a18c-48cc-8b6d-4516a64c1de9\") " pod="openstack/keystone-cron-29499301-92jj6" Feb 01 15:01:00 crc kubenswrapper[4820]: I0201 15:01:00.459363 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgdgd\" (UniqueName: \"kubernetes.io/projected/acc829d8-a18c-48cc-8b6d-4516a64c1de9-kube-api-access-zgdgd\") pod \"keystone-cron-29499301-92jj6\" (UID: \"acc829d8-a18c-48cc-8b6d-4516a64c1de9\") " pod="openstack/keystone-cron-29499301-92jj6" Feb 01 15:01:00 crc kubenswrapper[4820]: I0201 15:01:00.542658 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29499301-92jj6" Feb 01 15:01:00 crc kubenswrapper[4820]: I0201 15:01:00.818393 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29499301-92jj6"] Feb 01 15:01:01 crc kubenswrapper[4820]: I0201 15:01:01.725784 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29499301-92jj6" event={"ID":"acc829d8-a18c-48cc-8b6d-4516a64c1de9","Type":"ContainerStarted","Data":"050982e7bfb29b3d3d8f320ab5377807757108f3012b47a070c68b483df31159"} Feb 01 15:01:01 crc kubenswrapper[4820]: I0201 15:01:01.726139 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29499301-92jj6" event={"ID":"acc829d8-a18c-48cc-8b6d-4516a64c1de9","Type":"ContainerStarted","Data":"8a14a391eca88a56266dbd08552dd342f3f0af43692c9cec32bac5c3509b5845"} Feb 01 15:01:01 crc kubenswrapper[4820]: I0201 15:01:01.767578 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29499301-92jj6" podStartSLOduration=1.7675513440000001 podStartE2EDuration="1.767551344s" podCreationTimestamp="2026-02-01 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 15:01:01.749521361 +0000 UTC m=+2403.269887695" watchObservedRunningTime="2026-02-01 15:01:01.767551344 +0000 UTC m=+2403.287917638" Feb 01 15:01:03 crc kubenswrapper[4820]: I0201 15:01:03.747832 4820 generic.go:334] "Generic (PLEG): container finished" podID="acc829d8-a18c-48cc-8b6d-4516a64c1de9" containerID="050982e7bfb29b3d3d8f320ab5377807757108f3012b47a070c68b483df31159" exitCode=0 Feb 01 15:01:03 crc kubenswrapper[4820]: I0201 15:01:03.747927 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29499301-92jj6" event={"ID":"acc829d8-a18c-48cc-8b6d-4516a64c1de9","Type":"ContainerDied","Data":"050982e7bfb29b3d3d8f320ab5377807757108f3012b47a070c68b483df31159"} Feb 01 15:01:05 crc kubenswrapper[4820]: I0201 15:01:05.189017 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29499301-92jj6" Feb 01 15:01:05 crc kubenswrapper[4820]: I0201 15:01:05.342669 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acc829d8-a18c-48cc-8b6d-4516a64c1de9-config-data\") pod \"acc829d8-a18c-48cc-8b6d-4516a64c1de9\" (UID: \"acc829d8-a18c-48cc-8b6d-4516a64c1de9\") " Feb 01 15:01:05 crc kubenswrapper[4820]: I0201 15:01:05.343062 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdgd\" (UniqueName: \"kubernetes.io/projected/acc829d8-a18c-48cc-8b6d-4516a64c1de9-kube-api-access-zgdgd\") pod \"acc829d8-a18c-48cc-8b6d-4516a64c1de9\" (UID: \"acc829d8-a18c-48cc-8b6d-4516a64c1de9\") " Feb 01 15:01:05 crc kubenswrapper[4820]: I0201 15:01:05.343179 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc829d8-a18c-48cc-8b6d-4516a64c1de9-combined-ca-bundle\") pod \"acc829d8-a18c-48cc-8b6d-4516a64c1de9\" (UID: \"acc829d8-a18c-48cc-8b6d-4516a64c1de9\") " Feb 01 15:01:05 crc kubenswrapper[4820]: I0201 15:01:05.343557 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/acc829d8-a18c-48cc-8b6d-4516a64c1de9-fernet-keys\") pod \"acc829d8-a18c-48cc-8b6d-4516a64c1de9\" (UID: \"acc829d8-a18c-48cc-8b6d-4516a64c1de9\") " Feb 01 15:01:05 crc kubenswrapper[4820]: I0201 15:01:05.350143 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acc829d8-a18c-48cc-8b6d-4516a64c1de9-kube-api-access-zgdgd" (OuterVolumeSpecName: "kube-api-access-zgdgd") pod "acc829d8-a18c-48cc-8b6d-4516a64c1de9" (UID: "acc829d8-a18c-48cc-8b6d-4516a64c1de9"). InnerVolumeSpecName "kube-api-access-zgdgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:01:05 crc kubenswrapper[4820]: I0201 15:01:05.350302 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acc829d8-a18c-48cc-8b6d-4516a64c1de9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "acc829d8-a18c-48cc-8b6d-4516a64c1de9" (UID: "acc829d8-a18c-48cc-8b6d-4516a64c1de9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:01:05 crc kubenswrapper[4820]: I0201 15:01:05.376466 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acc829d8-a18c-48cc-8b6d-4516a64c1de9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "acc829d8-a18c-48cc-8b6d-4516a64c1de9" (UID: "acc829d8-a18c-48cc-8b6d-4516a64c1de9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:01:05 crc kubenswrapper[4820]: I0201 15:01:05.404631 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acc829d8-a18c-48cc-8b6d-4516a64c1de9-config-data" (OuterVolumeSpecName: "config-data") pod "acc829d8-a18c-48cc-8b6d-4516a64c1de9" (UID: "acc829d8-a18c-48cc-8b6d-4516a64c1de9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:01:05 crc kubenswrapper[4820]: I0201 15:01:05.446276 4820 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/acc829d8-a18c-48cc-8b6d-4516a64c1de9-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 01 15:01:05 crc kubenswrapper[4820]: I0201 15:01:05.446330 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acc829d8-a18c-48cc-8b6d-4516a64c1de9-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 15:01:05 crc kubenswrapper[4820]: I0201 15:01:05.446353 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdgd\" (UniqueName: \"kubernetes.io/projected/acc829d8-a18c-48cc-8b6d-4516a64c1de9-kube-api-access-zgdgd\") on node \"crc\" DevicePath \"\"" Feb 01 15:01:05 crc kubenswrapper[4820]: I0201 15:01:05.446373 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc829d8-a18c-48cc-8b6d-4516a64c1de9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 15:01:05 crc kubenswrapper[4820]: I0201 15:01:05.778806 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29499301-92jj6" event={"ID":"acc829d8-a18c-48cc-8b6d-4516a64c1de9","Type":"ContainerDied","Data":"8a14a391eca88a56266dbd08552dd342f3f0af43692c9cec32bac5c3509b5845"} Feb 01 15:01:05 crc kubenswrapper[4820]: I0201 15:01:05.778901 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a14a391eca88a56266dbd08552dd342f3f0af43692c9cec32bac5c3509b5845" Feb 01 15:01:05 crc kubenswrapper[4820]: I0201 15:01:05.778938 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29499301-92jj6" Feb 01 15:01:10 crc kubenswrapper[4820]: I0201 15:01:10.199355 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 15:01:10 crc kubenswrapper[4820]: E0201 15:01:10.200229 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:01:25 crc kubenswrapper[4820]: I0201 15:01:25.200097 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 15:01:25 crc kubenswrapper[4820]: E0201 15:01:25.201103 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:01:36 crc kubenswrapper[4820]: I0201 15:01:36.199303 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 15:01:36 crc kubenswrapper[4820]: E0201 15:01:36.200203 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:01:50 crc kubenswrapper[4820]: I0201 15:01:50.199861 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 15:01:50 crc kubenswrapper[4820]: E0201 15:01:50.201161 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:02:01 crc kubenswrapper[4820]: I0201 15:02:01.199936 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 15:02:01 crc kubenswrapper[4820]: E0201 15:02:01.201358 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:02:15 crc kubenswrapper[4820]: I0201 15:02:15.199700 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 15:02:15 crc kubenswrapper[4820]: E0201 15:02:15.201320 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:02:26 crc kubenswrapper[4820]: I0201 15:02:26.199332 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 15:02:26 crc kubenswrapper[4820]: E0201 15:02:26.200616 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:02:39 crc kubenswrapper[4820]: I0201 15:02:39.212340 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 15:02:39 crc kubenswrapper[4820]: E0201 15:02:39.213495 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:02:53 crc kubenswrapper[4820]: I0201 15:02:53.198773 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 15:02:53 crc kubenswrapper[4820]: E0201 15:02:53.199787 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:03:05 crc kubenswrapper[4820]: I0201 15:03:05.199561 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 15:03:05 crc kubenswrapper[4820]: E0201 15:03:05.200810 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:03:16 crc kubenswrapper[4820]: I0201 15:03:16.199180 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 15:03:16 crc kubenswrapper[4820]: E0201 15:03:16.200596 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:03:27 crc kubenswrapper[4820]: I0201 15:03:27.199015 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 15:03:27 crc kubenswrapper[4820]: I0201 15:03:27.526941 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerStarted","Data":"0d61213b4a10a4ab74b8c67738ccbaf8ce69c525fdf88a8f53d56aa59cdd82b9"} Feb 01 15:04:14 crc kubenswrapper[4820]: I0201 15:04:14.083281 4820 generic.go:334] "Generic (PLEG): container finished" podID="c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128" containerID="308e0beb76f9f328f976ddf1f7c3cf3c75d348c92ffa8879bae1b163cb6a1508" exitCode=0 Feb 01 15:04:14 crc kubenswrapper[4820]: I0201 15:04:14.083434 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" event={"ID":"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128","Type":"ContainerDied","Data":"308e0beb76f9f328f976ddf1f7c3cf3c75d348c92ffa8879bae1b163cb6a1508"} Feb 01 15:04:15 crc kubenswrapper[4820]: I0201 15:04:15.574175 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" Feb 01 15:04:15 crc kubenswrapper[4820]: I0201 15:04:15.690045 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-ssh-key-openstack-edpm-ipam\") pod \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\" (UID: \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\") " Feb 01 15:04:15 crc kubenswrapper[4820]: I0201 15:04:15.690136 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-inventory\") pod \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\" (UID: \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\") " Feb 01 15:04:15 crc kubenswrapper[4820]: I0201 15:04:15.690255 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-libvirt-combined-ca-bundle\") pod \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\" (UID: \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\") " Feb 01 15:04:15 crc kubenswrapper[4820]: I0201 15:04:15.690343 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-ceph\") pod \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\" (UID: \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\") " Feb 01 15:04:15 crc kubenswrapper[4820]: I0201 15:04:15.690379 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-libvirt-secret-0\") pod \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\" (UID: \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\") " Feb 01 15:04:15 crc kubenswrapper[4820]: I0201 15:04:15.690484 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7jjg\" (UniqueName: \"kubernetes.io/projected/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-kube-api-access-z7jjg\") pod \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\" (UID: \"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128\") " Feb 01 15:04:15 crc kubenswrapper[4820]: I0201 15:04:15.699271 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-kube-api-access-z7jjg" (OuterVolumeSpecName: "kube-api-access-z7jjg") pod "c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128" (UID: "c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128"). InnerVolumeSpecName "kube-api-access-z7jjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:04:15 crc kubenswrapper[4820]: I0201 15:04:15.699732 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128" (UID: "c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:04:15 crc kubenswrapper[4820]: I0201 15:04:15.704757 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-ceph" (OuterVolumeSpecName: "ceph") pod "c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128" (UID: "c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:04:15 crc kubenswrapper[4820]: I0201 15:04:15.722610 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-inventory" (OuterVolumeSpecName: "inventory") pod "c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128" (UID: "c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:04:15 crc kubenswrapper[4820]: I0201 15:04:15.727112 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128" (UID: "c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:04:15 crc kubenswrapper[4820]: I0201 15:04:15.744028 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128" (UID: "c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:04:15 crc kubenswrapper[4820]: I0201 15:04:15.793671 4820 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 15:04:15 crc kubenswrapper[4820]: I0201 15:04:15.793714 4820 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-ceph\") on node \"crc\" DevicePath \"\"" Feb 01 15:04:15 crc kubenswrapper[4820]: I0201 15:04:15.793732 4820 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 01 15:04:15 crc kubenswrapper[4820]: I0201 15:04:15.793745 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7jjg\" (UniqueName: \"kubernetes.io/projected/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-kube-api-access-z7jjg\") on node \"crc\" DevicePath \"\"" Feb 01 15:04:15 crc kubenswrapper[4820]: I0201 15:04:15.793758 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 15:04:15 crc kubenswrapper[4820]: I0201 15:04:15.793771 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128-inventory\") on node \"crc\" DevicePath \"\"" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.106051 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" event={"ID":"c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128","Type":"ContainerDied","Data":"11887ecc704068f36f766a24a6ba2b22996248141d9120c1391f2ae5acca9bd6"} Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.106099 4820 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="11887ecc704068f36f766a24a6ba2b22996248141d9120c1391f2ae5acca9bd6" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.106183 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.216103 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2"] Feb 01 15:04:16 crc kubenswrapper[4820]: E0201 15:04:16.216963 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.216982 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 01 15:04:16 crc kubenswrapper[4820]: E0201 15:04:16.217008 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acc829d8-a18c-48cc-8b6d-4516a64c1de9" containerName="keystone-cron" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.217035 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="acc829d8-a18c-48cc-8b6d-4516a64c1de9" containerName="keystone-cron" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.217203 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.217285 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="acc829d8-a18c-48cc-8b6d-4516a64c1de9" containerName="keystone-cron" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.219436 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.223020 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.223244 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.223413 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.223458 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.223517 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.223713 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.223809 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.224041 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kbqzw" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.226317 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.232430 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2"] Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.405819 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zbx6\" (UniqueName: \"kubernetes.io/projected/1573e690-1a23-4563-806d-8023f7d44c43-kube-api-access-8zbx6\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.405869 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.405909 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.407332 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.407522 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.407707 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.407779 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.407865 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/1573e690-1a23-4563-806d-8023f7d44c43-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.407975 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.408028 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/1573e690-1a23-4563-806d-8023f7d44c43-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.408150 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: 
\"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.509435 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.509619 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zbx6\" (UniqueName: \"kubernetes.io/projected/1573e690-1a23-4563-806d-8023f7d44c43-kube-api-access-8zbx6\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.509669 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.509709 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.509749 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.509820 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.509916 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.509953 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.510019 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/1573e690-1a23-4563-806d-8023f7d44c43-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.510076 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.510125 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/1573e690-1a23-4563-806d-8023f7d44c43-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.511594 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/1573e690-1a23-4563-806d-8023f7d44c43-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.511695 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/1573e690-1a23-4563-806d-8023f7d44c43-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.515020 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.515096 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.516102 4820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.516195 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.516201 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.516286 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.516712 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.522493 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.528187 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zbx6\" (UniqueName: \"kubernetes.io/projected/1573e690-1a23-4563-806d-8023f7d44c43-kube-api-access-8zbx6\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:04:16 crc kubenswrapper[4820]: I0201 15:04:16.549576 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2"
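The run of records above shows the kubelet's volume manager driving each of the pod's volumes through the same three stages before any container can start: operationExecutor.VerifyControllerAttachedVolume (reconciler_common.go:245), operationExecutor.MountVolume (reconciler_common.go:218), and MountVolume.SetUp succeeded (operation_generator.go:637); only after the last secret, configmap, and projected volume is mounted does the kubelet look for a sandbox. A minimal Go sketch of that desired-state loop, with all names hypothetical and no claim to match the real kubelet sources:

package main

import "fmt"

// volume stands in for one entry of the pod's desired state of the world,
// e.g. "inventory" or "kube-api-access-8zbx6" in the records above.
type volume struct {
    name    string
    mounted bool
}

// reconcile mirrors the verify -> mount -> SetUp-succeeded progression
// that the log prints once per volume.
func reconcile(desired []*volume) {
    for _, v := range desired {
        fmt.Printf("VerifyControllerAttachedVolume started for %q\n", v.name)
        fmt.Printf("MountVolume started for %q\n", v.name)
        v.mounted = true // the real kubelet calls the volume plugin's SetUp here
        fmt.Printf("MountVolume.SetUp succeeded for %q\n", v.name)
    }
}

func main() {
    reconcile([]*volume{{name: "inventory"}, {name: "ceph"}})
}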
Feb 01 15:04:17 crc kubenswrapper[4820]: I0201 15:04:17.098616 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2"] Feb 01 15:04:17 crc kubenswrapper[4820]: W0201 15:04:17.102925 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1573e690_1a23_4563_806d_8023f7d44c43.slice/crio-625f6c62092f4416de489ce3b8568bd7dc99b3f59f536c5fb2ebe39a1701fc53 WatchSource:0}: Error finding container 625f6c62092f4416de489ce3b8568bd7dc99b3f59f536c5fb2ebe39a1701fc53: Status 404 returned error can't find the container with id 625f6c62092f4416de489ce3b8568bd7dc99b3f59f536c5fb2ebe39a1701fc53 Feb 01 15:04:17 crc kubenswrapper[4820]: I0201 15:04:17.106238 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 01 15:04:17 crc kubenswrapper[4820]: I0201 15:04:17.120640 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" event={"ID":"1573e690-1a23-4563-806d-8023f7d44c43","Type":"ContainerStarted","Data":"625f6c62092f4416de489ce3b8568bd7dc99b3f59f536c5fb2ebe39a1701fc53"} Feb 01 15:04:18 crc kubenswrapper[4820]: I0201 15:04:18.133034 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" event={"ID":"1573e690-1a23-4563-806d-8023f7d44c43","Type":"ContainerStarted","Data":"89c807fe94e2d65fbf130545b13085260dc17b2b0a2a6431552dcac2e6dda0b7"} Feb 01 15:04:18 crc kubenswrapper[4820]: I0201 15:04:18.174798 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" podStartSLOduration=1.734365554 podStartE2EDuration="2.174747855s" podCreationTimestamp="2026-02-01 15:04:16 +0000 UTC" firstStartedPulling="2026-02-01 15:04:17.105926941 +0000 UTC m=+2598.626293225" lastFinishedPulling="2026-02-01 15:04:17.546309242 +0000 UTC m=+2599.066675526" observedRunningTime="2026-02-01 15:04:18.156559809 +0000 UTC m=+2599.676926123" watchObservedRunningTime="2026-02-01 15:04:18.174747855 +0000 UTC m=+2599.695114169" Feb 01 15:05:49 crc kubenswrapper[4820]: I0201 15:05:49.242196 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 15:05:49 crc kubenswrapper[4820]: I0201 15:05:49.243059 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 15:06:19 crc kubenswrapper[4820]: I0201 15:06:19.243387 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 15:06:19 crc kubenswrapper[4820]: I0201 15:06:19.244428 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
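The "Observed pod startup duration" record above carries two figures for the deployment pod: podStartE2EDuration="2.174747855s", which equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration=1.734365554, which matches the end-to-end figure once the image-pull window (lastFinishedPulling minus firstStartedPulling, here 0.440382301s) is subtracted. A short Go check of that arithmetic, under the assumption that the SLO figure is defined as startup time excluding image pulls:

package main

import (
    "fmt"
    "time"
)

func main() {
    // Values copied from the pod_startup_latency_tracker record above.
    e2e := 2174747855 * time.Nanosecond // watchObservedRunningTime - podCreationTimestamp
    // Image-pull window: 15:04:17.546309242 - 15:04:17.105926941.
    pull := 440382301 * time.Nanosecond
    // Assumption: podStartSLOduration excludes time spent pulling images.
    fmt.Println((e2e - pull).Seconds()) // prints 1.734365554
}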
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 15:06:40 crc kubenswrapper[4820]: I0201 15:06:40.642385 4820 generic.go:334] "Generic (PLEG): container finished" podID="1573e690-1a23-4563-806d-8023f7d44c43" containerID="89c807fe94e2d65fbf130545b13085260dc17b2b0a2a6431552dcac2e6dda0b7" exitCode=0 Feb 01 15:06:40 crc kubenswrapper[4820]: I0201 15:06:40.642492 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" event={"ID":"1573e690-1a23-4563-806d-8023f7d44c43","Type":"ContainerDied","Data":"89c807fe94e2d65fbf130545b13085260dc17b2b0a2a6431552dcac2e6dda0b7"} Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.168948 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.229171 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-ceph\") pod \"1573e690-1a23-4563-806d-8023f7d44c43\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.229242 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-cell1-compute-config-0\") pod \"1573e690-1a23-4563-806d-8023f7d44c43\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.229353 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/1573e690-1a23-4563-806d-8023f7d44c43-ceph-nova-0\") pod \"1573e690-1a23-4563-806d-8023f7d44c43\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.229387 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zbx6\" (UniqueName: \"kubernetes.io/projected/1573e690-1a23-4563-806d-8023f7d44c43-kube-api-access-8zbx6\") pod \"1573e690-1a23-4563-806d-8023f7d44c43\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.229410 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-cell1-compute-config-1\") pod \"1573e690-1a23-4563-806d-8023f7d44c43\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.229431 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-migration-ssh-key-1\") pod \"1573e690-1a23-4563-806d-8023f7d44c43\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.229452 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/1573e690-1a23-4563-806d-8023f7d44c43-nova-extra-config-0\") 
pod \"1573e690-1a23-4563-806d-8023f7d44c43\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.229500 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-migration-ssh-key-0\") pod \"1573e690-1a23-4563-806d-8023f7d44c43\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.229527 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-inventory\") pod \"1573e690-1a23-4563-806d-8023f7d44c43\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.229601 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-custom-ceph-combined-ca-bundle\") pod \"1573e690-1a23-4563-806d-8023f7d44c43\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.229635 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-ssh-key-openstack-edpm-ipam\") pod \"1573e690-1a23-4563-806d-8023f7d44c43\" (UID: \"1573e690-1a23-4563-806d-8023f7d44c43\") " Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.240358 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "1573e690-1a23-4563-806d-8023f7d44c43" (UID: "1573e690-1a23-4563-806d-8023f7d44c43"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.241387 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-ceph" (OuterVolumeSpecName: "ceph") pod "1573e690-1a23-4563-806d-8023f7d44c43" (UID: "1573e690-1a23-4563-806d-8023f7d44c43"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.244074 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1573e690-1a23-4563-806d-8023f7d44c43-kube-api-access-8zbx6" (OuterVolumeSpecName: "kube-api-access-8zbx6") pod "1573e690-1a23-4563-806d-8023f7d44c43" (UID: "1573e690-1a23-4563-806d-8023f7d44c43"). InnerVolumeSpecName "kube-api-access-8zbx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.262722 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1573e690-1a23-4563-806d-8023f7d44c43-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "1573e690-1a23-4563-806d-8023f7d44c43" (UID: "1573e690-1a23-4563-806d-8023f7d44c43"). InnerVolumeSpecName "ceph-nova-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.271799 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-inventory" (OuterVolumeSpecName: "inventory") pod "1573e690-1a23-4563-806d-8023f7d44c43" (UID: "1573e690-1a23-4563-806d-8023f7d44c43"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.273918 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "1573e690-1a23-4563-806d-8023f7d44c43" (UID: "1573e690-1a23-4563-806d-8023f7d44c43"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.273905 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "1573e690-1a23-4563-806d-8023f7d44c43" (UID: "1573e690-1a23-4563-806d-8023f7d44c43"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.274785 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1573e690-1a23-4563-806d-8023f7d44c43" (UID: "1573e690-1a23-4563-806d-8023f7d44c43"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.285610 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "1573e690-1a23-4563-806d-8023f7d44c43" (UID: "1573e690-1a23-4563-806d-8023f7d44c43"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.288174 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1573e690-1a23-4563-806d-8023f7d44c43-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "1573e690-1a23-4563-806d-8023f7d44c43" (UID: "1573e690-1a23-4563-806d-8023f7d44c43"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.292493 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "1573e690-1a23-4563-806d-8023f7d44c43" (UID: "1573e690-1a23-4563-806d-8023f7d44c43"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.332376 4820 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/1573e690-1a23-4563-806d-8023f7d44c43-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.332429 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zbx6\" (UniqueName: \"kubernetes.io/projected/1573e690-1a23-4563-806d-8023f7d44c43-kube-api-access-8zbx6\") on node \"crc\" DevicePath \"\"" Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.332446 4820 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.332460 4820 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.332476 4820 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/1573e690-1a23-4563-806d-8023f7d44c43-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.332489 4820 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.332500 4820 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-inventory\") on node \"crc\" DevicePath \"\"" Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.332514 4820 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.332526 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.332539 4820 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-ceph\") on node \"crc\" DevicePath \"\"" Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.332552 4820 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1573e690-1a23-4563-806d-8023f7d44c43-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.664268 4820 util.go:48] "No ready sandbox for pod can be found. 
Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.664589 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2" event={"ID":"1573e690-1a23-4563-806d-8023f7d44c43","Type":"ContainerDied","Data":"625f6c62092f4416de489ce3b8568bd7dc99b3f59f536c5fb2ebe39a1701fc53"} Feb 01 15:06:42 crc kubenswrapper[4820]: I0201 15:06:42.665032 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="625f6c62092f4416de489ce3b8568bd7dc99b3f59f536c5fb2ebe39a1701fc53" Feb 01 15:06:42 crc kubenswrapper[4820]: E0201 15:06:42.782109 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1573e690_1a23_4563_806d_8023f7d44c43.slice/crio-625f6c62092f4416de489ce3b8568bd7dc99b3f59f536c5fb2ebe39a1701fc53\": RecentStats: unable to find data in memory cache]" Feb 01 15:06:49 crc kubenswrapper[4820]: I0201 15:06:49.242101 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 15:06:49 crc kubenswrapper[4820]: I0201 15:06:49.242616 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 15:06:49 crc kubenswrapper[4820]: I0201 15:06:49.242659 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 15:06:49 crc kubenswrapper[4820]: I0201 15:06:49.243258 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0d61213b4a10a4ab74b8c67738ccbaf8ce69c525fdf88a8f53d56aa59cdd82b9"} pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 15:06:49 crc kubenswrapper[4820]: I0201 15:06:49.243320 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" containerID="cri-o://0d61213b4a10a4ab74b8c67738ccbaf8ce69c525fdf88a8f53d56aa59cdd82b9" gracePeriod=600 Feb 01 15:06:49 crc kubenswrapper[4820]: I0201 15:06:49.747616 4820 generic.go:334] "Generic (PLEG): container finished" podID="060a9e0b-803f-4ccc-bed6-92614d449527" containerID="0d61213b4a10a4ab74b8c67738ccbaf8ce69c525fdf88a8f53d56aa59cdd82b9" exitCode=0 Feb 01 15:06:49 crc kubenswrapper[4820]: I0201 15:06:49.747699 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerDied","Data":"0d61213b4a10a4ab74b8c67738ccbaf8ce69c525fdf88a8f53d56aa59cdd82b9"} Feb 01 15:06:49 crc kubenswrapper[4820]: I0201 15:06:49.747936 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerStarted","Data":"a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589"} Feb 01 15:06:49 crc kubenswrapper[4820]: I0201 15:06:49.747961 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23"
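The records above form one complete liveness-probe restart cycle for machine-config-daemon-w8vbg: the HTTP GET to 127.0.0.1:8798/health is refused, the prober reports the container unhealthy, the kubelet kills it with gracePeriod=600, and PLEG then observes ContainerDied followed by ContainerStarted for the replacement. The probe's failure semantics can be sketched as below; the one-second timeout is illustrative, since the pod's actual probe spec is not part of this log:

package main

import (
    "fmt"
    "net/http"
    "time"
)

// probe reproduces the shape of an HTTP liveness check: any transport
// error (here "connect: connection refused") or a status outside
// 200-399 counts as a failure.
func probe(url string) error {
    client := &http.Client{Timeout: time.Second}
    resp, err := client.Get(url)
    if err != nil {
        return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
    }
    defer resp.Body.Close()
    if resp.StatusCode < 200 || resp.StatusCode >= 400 {
        return fmt.Errorf("unexpected status %d", resp.StatusCode)
    }
    return nil
}

func main() {
    if err := probe("http://127.0.0.1:8798/health"); err != nil {
        fmt.Println("Probe failed:", err)
    }
}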
pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerStarted","Data":"a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589"} Feb 01 15:06:49 crc kubenswrapper[4820]: I0201 15:06:49.747961 4820 scope.go:117] "RemoveContainer" containerID="6fda1663a73d8997cf7e3bdf24ee54461253a77e6525a8507f498da9a225ab23" Feb 01 15:06:55 crc kubenswrapper[4820]: I0201 15:06:55.893975 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 01 15:06:55 crc kubenswrapper[4820]: E0201 15:06:55.894975 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1573e690-1a23-4563-806d-8023f7d44c43" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Feb 01 15:06:55 crc kubenswrapper[4820]: I0201 15:06:55.894995 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="1573e690-1a23-4563-806d-8023f7d44c43" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Feb 01 15:06:55 crc kubenswrapper[4820]: I0201 15:06:55.895255 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="1573e690-1a23-4563-806d-8023f7d44c43" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Feb 01 15:06:55 crc kubenswrapper[4820]: I0201 15:06:55.896371 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:55 crc kubenswrapper[4820]: I0201 15:06:55.906244 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Feb 01 15:06:55 crc kubenswrapper[4820]: I0201 15:06:55.910748 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 01 15:06:55 crc kubenswrapper[4820]: I0201 15:06:55.911151 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Feb 01 15:06:55 crc kubenswrapper[4820]: I0201 15:06:55.915446 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Feb 01 15:06:55 crc kubenswrapper[4820]: I0201 15:06:55.917778 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Feb 01 15:06:55 crc kubenswrapper[4820]: I0201 15:06:55.920555 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 01 15:06:55 crc kubenswrapper[4820]: I0201 15:06:55.931250 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.021235 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2efc1e63-fe88-414b-accb-7a48e72f12d6-config-data-custom\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.021279 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2efc1e63-fe88-414b-accb-7a48e72f12d6-ceph\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.021303 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.021327 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.021345 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dabdc80-a117-4e53-9b1d-b8af575ba10f-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.021398 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2efc1e63-fe88-414b-accb-7a48e72f12d6-scripts\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.021427 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.021616 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: 
\"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.021668 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfczg\" (UniqueName: \"kubernetes.io/projected/2efc1e63-fe88-414b-accb-7a48e72f12d6-kube-api-access-sfczg\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.021727 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-run\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.021753 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2efc1e63-fe88-414b-accb-7a48e72f12d6-config-data\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.021771 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.021817 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.021835 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8dabdc80-a117-4e53-9b1d-b8af575ba10f-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.021851 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8dabdc80-a117-4e53-9b1d-b8af575ba10f-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.021867 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-dev\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.021897 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-sys\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: 
I0201 15:06:56.022001 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dabdc80-a117-4e53-9b1d-b8af575ba10f-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.022085 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-dev\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.022115 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.022136 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-sys\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.022175 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.022194 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkx4g\" (UniqueName: \"kubernetes.io/projected/8dabdc80-a117-4e53-9b1d-b8af575ba10f-kube-api-access-fkx4g\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.022254 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.022273 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-lib-modules\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.022308 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2efc1e63-fe88-414b-accb-7a48e72f12d6-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.022390 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-run\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.022460 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.022505 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.022548 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-etc-nvme\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.022575 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dabdc80-a117-4e53-9b1d-b8af575ba10f-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.022690 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.124109 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-run\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.124161 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.124184 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.124206 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-etc-nvme\") pod 
\"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.124220 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dabdc80-a117-4e53-9b1d-b8af575ba10f-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.124242 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.124272 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2efc1e63-fe88-414b-accb-7a48e72f12d6-config-data-custom\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.124294 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2efc1e63-fe88-414b-accb-7a48e72f12d6-ceph\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.124312 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.124378 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.124398 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dabdc80-a117-4e53-9b1d-b8af575ba10f-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.124420 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2efc1e63-fe88-414b-accb-7a48e72f12d6-scripts\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.124434 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.124477 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.124494 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfczg\" (UniqueName: \"kubernetes.io/projected/2efc1e63-fe88-414b-accb-7a48e72f12d6-kube-api-access-sfczg\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.124520 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-run\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.124565 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2efc1e63-fe88-414b-accb-7a48e72f12d6-config-data\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.124579 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.124613 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.124756 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.124780 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.124845 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-etc-nvme\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.125554 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.125611 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"run\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-run\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.125657 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.124627 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.125958 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8dabdc80-a117-4e53-9b1d-b8af575ba10f-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.125978 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.125995 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8dabdc80-a117-4e53-9b1d-b8af575ba10f-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.126020 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-dev\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.126045 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-run\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.126048 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-sys\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.126073 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-sys\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.126122 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dabdc80-a117-4e53-9b1d-b8af575ba10f-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.126182 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-dev\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.126209 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-sys\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.126231 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.126264 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.126293 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkx4g\" (UniqueName: \"kubernetes.io/projected/8dabdc80-a117-4e53-9b1d-b8af575ba10f-kube-api-access-fkx4g\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.126310 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.126322 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.126356 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-lib-modules\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.126375 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2efc1e63-fe88-414b-accb-7a48e72f12d6-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " 
pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.126400 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-sys\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.126624 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-dev\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.126379 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.128033 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.128117 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-dev\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.128424 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.128718 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.128860 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2efc1e63-fe88-414b-accb-7a48e72f12d6-lib-modules\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.128913 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8dabdc80-a117-4e53-9b1d-b8af575ba10f-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.132918 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2efc1e63-fe88-414b-accb-7a48e72f12d6-config-data\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 
15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.140590 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2efc1e63-fe88-414b-accb-7a48e72f12d6-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.145704 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfczg\" (UniqueName: \"kubernetes.io/projected/2efc1e63-fe88-414b-accb-7a48e72f12d6-kube-api-access-sfczg\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.145900 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2efc1e63-fe88-414b-accb-7a48e72f12d6-config-data-custom\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.146127 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dabdc80-a117-4e53-9b1d-b8af575ba10f-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.146270 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2efc1e63-fe88-414b-accb-7a48e72f12d6-scripts\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.147409 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8dabdc80-a117-4e53-9b1d-b8af575ba10f-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.147698 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dabdc80-a117-4e53-9b1d-b8af575ba10f-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.150294 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dabdc80-a117-4e53-9b1d-b8af575ba10f-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.150725 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkx4g\" (UniqueName: \"kubernetes.io/projected/8dabdc80-a117-4e53-9b1d-b8af575ba10f-kube-api-access-fkx4g\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.158072 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2efc1e63-fe88-414b-accb-7a48e72f12d6-ceph\") pod \"cinder-backup-0\" (UID: \"2efc1e63-fe88-414b-accb-7a48e72f12d6\") " 
pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.164522 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8dabdc80-a117-4e53-9b1d-b8af575ba10f-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"8dabdc80-a117-4e53-9b1d-b8af575ba10f\") " pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.217640 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.230079 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.349295 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-7b9z7"] Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.350455 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-7b9z7" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.363051 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-7b9z7"] Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.436922 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7be6fa9-d20e-4b1c-82c4-4a13ddde938e-operator-scripts\") pod \"manila-db-create-7b9z7\" (UID: \"c7be6fa9-d20e-4b1c-82c4-4a13ddde938e\") " pod="openstack/manila-db-create-7b9z7" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.437279 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2476\" (UniqueName: \"kubernetes.io/projected/c7be6fa9-d20e-4b1c-82c4-4a13ddde938e-kube-api-access-s2476\") pod \"manila-db-create-7b9z7\" (UID: \"c7be6fa9-d20e-4b1c-82c4-4a13ddde938e\") " pod="openstack/manila-db-create-7b9z7" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.475199 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-3bac-account-create-update-g2ggq"] Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.477553 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-3bac-account-create-update-g2ggq" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.480488 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.489274 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-3bac-account-create-update-g2ggq"] Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.539305 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09f978a2-b4d7-4e2c-83f1-41778effd23c-operator-scripts\") pod \"manila-3bac-account-create-update-g2ggq\" (UID: \"09f978a2-b4d7-4e2c-83f1-41778effd23c\") " pod="openstack/manila-3bac-account-create-update-g2ggq" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.539454 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7be6fa9-d20e-4b1c-82c4-4a13ddde938e-operator-scripts\") pod \"manila-db-create-7b9z7\" (UID: \"c7be6fa9-d20e-4b1c-82c4-4a13ddde938e\") " pod="openstack/manila-db-create-7b9z7" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.539478 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mm6f\" (UniqueName: \"kubernetes.io/projected/09f978a2-b4d7-4e2c-83f1-41778effd23c-kube-api-access-8mm6f\") pod \"manila-3bac-account-create-update-g2ggq\" (UID: \"09f978a2-b4d7-4e2c-83f1-41778effd23c\") " pod="openstack/manila-3bac-account-create-update-g2ggq" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.539509 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2476\" (UniqueName: \"kubernetes.io/projected/c7be6fa9-d20e-4b1c-82c4-4a13ddde938e-kube-api-access-s2476\") pod \"manila-db-create-7b9z7\" (UID: \"c7be6fa9-d20e-4b1c-82c4-4a13ddde938e\") " pod="openstack/manila-db-create-7b9z7" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.540990 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7be6fa9-d20e-4b1c-82c4-4a13ddde938e-operator-scripts\") pod \"manila-db-create-7b9z7\" (UID: \"c7be6fa9-d20e-4b1c-82c4-4a13ddde938e\") " pod="openstack/manila-db-create-7b9z7" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.585411 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5897584f9c-s8gwr"] Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.587488 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5897584f9c-s8gwr" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.621597 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-9g7pn" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.623274 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2476\" (UniqueName: \"kubernetes.io/projected/c7be6fa9-d20e-4b1c-82c4-4a13ddde938e-kube-api-access-s2476\") pod \"manila-db-create-7b9z7\" (UID: \"c7be6fa9-d20e-4b1c-82c4-4a13ddde938e\") " pod="openstack/manila-db-create-7b9z7" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.624165 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.624574 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.624931 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.636033 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5897584f9c-s8gwr"] Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.640132 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mm6f\" (UniqueName: \"kubernetes.io/projected/09f978a2-b4d7-4e2c-83f1-41778effd23c-kube-api-access-8mm6f\") pod \"manila-3bac-account-create-update-g2ggq\" (UID: \"09f978a2-b4d7-4e2c-83f1-41778effd23c\") " pod="openstack/manila-3bac-account-create-update-g2ggq" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.640173 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdkwd\" (UniqueName: \"kubernetes.io/projected/90ffe183-d2c6-4914-9c7b-faedde2e565a-kube-api-access-wdkwd\") pod \"horizon-5897584f9c-s8gwr\" (UID: \"90ffe183-d2c6-4914-9c7b-faedde2e565a\") " pod="openstack/horizon-5897584f9c-s8gwr" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.640194 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90ffe183-d2c6-4914-9c7b-faedde2e565a-scripts\") pod \"horizon-5897584f9c-s8gwr\" (UID: \"90ffe183-d2c6-4914-9c7b-faedde2e565a\") " pod="openstack/horizon-5897584f9c-s8gwr" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.640241 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/90ffe183-d2c6-4914-9c7b-faedde2e565a-horizon-secret-key\") pod \"horizon-5897584f9c-s8gwr\" (UID: \"90ffe183-d2c6-4914-9c7b-faedde2e565a\") " pod="openstack/horizon-5897584f9c-s8gwr" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.640258 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09f978a2-b4d7-4e2c-83f1-41778effd23c-operator-scripts\") pod \"manila-3bac-account-create-update-g2ggq\" (UID: \"09f978a2-b4d7-4e2c-83f1-41778effd23c\") " pod="openstack/manila-3bac-account-create-update-g2ggq" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.640286 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/90ffe183-d2c6-4914-9c7b-faedde2e565a-config-data\") pod \"horizon-5897584f9c-s8gwr\" (UID: \"90ffe183-d2c6-4914-9c7b-faedde2e565a\") " pod="openstack/horizon-5897584f9c-s8gwr" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.640305 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90ffe183-d2c6-4914-9c7b-faedde2e565a-logs\") pod \"horizon-5897584f9c-s8gwr\" (UID: \"90ffe183-d2c6-4914-9c7b-faedde2e565a\") " pod="openstack/horizon-5897584f9c-s8gwr" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.642689 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09f978a2-b4d7-4e2c-83f1-41778effd23c-operator-scripts\") pod \"manila-3bac-account-create-update-g2ggq\" (UID: \"09f978a2-b4d7-4e2c-83f1-41778effd23c\") " pod="openstack/manila-3bac-account-create-update-g2ggq" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.666104 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.667606 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.673424 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-7b9z7" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.680909 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.681061 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.681154 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gl8s8" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.681595 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.692555 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mm6f\" (UniqueName: \"kubernetes.io/projected/09f978a2-b4d7-4e2c-83f1-41778effd23c-kube-api-access-8mm6f\") pod \"manila-3bac-account-create-update-g2ggq\" (UID: \"09f978a2-b4d7-4e2c-83f1-41778effd23c\") " pod="openstack/manila-3bac-account-create-update-g2ggq" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.705330 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.723950 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-8445bf989c-rxj4q"] Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.725990 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8445bf989c-rxj4q" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.747341 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdkwd\" (UniqueName: \"kubernetes.io/projected/90ffe183-d2c6-4914-9c7b-faedde2e565a-kube-api-access-wdkwd\") pod \"horizon-5897584f9c-s8gwr\" (UID: \"90ffe183-d2c6-4914-9c7b-faedde2e565a\") " pod="openstack/horizon-5897584f9c-s8gwr" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.747647 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90ffe183-d2c6-4914-9c7b-faedde2e565a-scripts\") pod \"horizon-5897584f9c-s8gwr\" (UID: \"90ffe183-d2c6-4914-9c7b-faedde2e565a\") " pod="openstack/horizon-5897584f9c-s8gwr" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.747757 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/90ffe183-d2c6-4914-9c7b-faedde2e565a-horizon-secret-key\") pod \"horizon-5897584f9c-s8gwr\" (UID: \"90ffe183-d2c6-4914-9c7b-faedde2e565a\") " pod="openstack/horizon-5897584f9c-s8gwr" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.747865 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90ffe183-d2c6-4914-9c7b-faedde2e565a-config-data\") pod \"horizon-5897584f9c-s8gwr\" (UID: \"90ffe183-d2c6-4914-9c7b-faedde2e565a\") " pod="openstack/horizon-5897584f9c-s8gwr" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.747973 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90ffe183-d2c6-4914-9c7b-faedde2e565a-logs\") pod \"horizon-5897584f9c-s8gwr\" (UID: \"90ffe183-d2c6-4914-9c7b-faedde2e565a\") " pod="openstack/horizon-5897584f9c-s8gwr" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.748568 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90ffe183-d2c6-4914-9c7b-faedde2e565a-logs\") pod \"horizon-5897584f9c-s8gwr\" (UID: \"90ffe183-d2c6-4914-9c7b-faedde2e565a\") " pod="openstack/horizon-5897584f9c-s8gwr" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.750530 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90ffe183-d2c6-4914-9c7b-faedde2e565a-scripts\") pod \"horizon-5897584f9c-s8gwr\" (UID: \"90ffe183-d2c6-4914-9c7b-faedde2e565a\") " pod="openstack/horizon-5897584f9c-s8gwr" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.751109 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90ffe183-d2c6-4914-9c7b-faedde2e565a-config-data\") pod \"horizon-5897584f9c-s8gwr\" (UID: \"90ffe183-d2c6-4914-9c7b-faedde2e565a\") " pod="openstack/horizon-5897584f9c-s8gwr" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.760208 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/90ffe183-d2c6-4914-9c7b-faedde2e565a-horizon-secret-key\") pod \"horizon-5897584f9c-s8gwr\" (UID: \"90ffe183-d2c6-4914-9c7b-faedde2e565a\") " pod="openstack/horizon-5897584f9c-s8gwr" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.771467 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/horizon-8445bf989c-rxj4q"] Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.775949 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdkwd\" (UniqueName: \"kubernetes.io/projected/90ffe183-d2c6-4914-9c7b-faedde2e565a-kube-api-access-wdkwd\") pod \"horizon-5897584f9c-s8gwr\" (UID: \"90ffe183-d2c6-4914-9c7b-faedde2e565a\") " pod="openstack/horizon-5897584f9c-s8gwr" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.850449 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e936f170-22b7-4e20-a808-57ab3e2cd6a7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.850493 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e936f170-22b7-4e20-a808-57ab3e2cd6a7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.850527 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-config-data\") pod \"horizon-8445bf989c-rxj4q\" (UID: \"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc\") " pod="openstack/horizon-8445bf989c-rxj4q" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.850557 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e936f170-22b7-4e20-a808-57ab3e2cd6a7-ceph\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.850585 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e936f170-22b7-4e20-a808-57ab3e2cd6a7-config-data\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.850608 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2lf9\" (UniqueName: \"kubernetes.io/projected/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-kube-api-access-f2lf9\") pod \"horizon-8445bf989c-rxj4q\" (UID: \"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc\") " pod="openstack/horizon-8445bf989c-rxj4q" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.850627 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e936f170-22b7-4e20-a808-57ab3e2cd6a7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.850662 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-horizon-secret-key\") pod 
\"horizon-8445bf989c-rxj4q\" (UID: \"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc\") " pod="openstack/horizon-8445bf989c-rxj4q" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.850710 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26nt7\" (UniqueName: \"kubernetes.io/projected/e936f170-22b7-4e20-a808-57ab3e2cd6a7-kube-api-access-26nt7\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.850766 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-logs\") pod \"horizon-8445bf989c-rxj4q\" (UID: \"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc\") " pod="openstack/horizon-8445bf989c-rxj4q" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.850782 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.850816 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-scripts\") pod \"horizon-8445bf989c-rxj4q\" (UID: \"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc\") " pod="openstack/horizon-8445bf989c-rxj4q" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.850836 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e936f170-22b7-4e20-a808-57ab3e2cd6a7-logs\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.850865 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e936f170-22b7-4e20-a808-57ab3e2cd6a7-scripts\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.857645 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-3bac-account-create-update-g2ggq" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.898957 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.900775 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.914941 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.915089 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.918708 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.951355 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.953937 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e936f170-22b7-4e20-a808-57ab3e2cd6a7-logs\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.953998 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e936f170-22b7-4e20-a808-57ab3e2cd6a7-scripts\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.954058 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e936f170-22b7-4e20-a808-57ab3e2cd6a7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.954088 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e936f170-22b7-4e20-a808-57ab3e2cd6a7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.954114 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-config-data\") pod \"horizon-8445bf989c-rxj4q\" (UID: \"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc\") " pod="openstack/horizon-8445bf989c-rxj4q" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.954131 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e936f170-22b7-4e20-a808-57ab3e2cd6a7-ceph\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.954160 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e936f170-22b7-4e20-a808-57ab3e2cd6a7-config-data\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.954187 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-f2lf9\" (UniqueName: \"kubernetes.io/projected/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-kube-api-access-f2lf9\") pod \"horizon-8445bf989c-rxj4q\" (UID: \"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc\") " pod="openstack/horizon-8445bf989c-rxj4q" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.954211 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e936f170-22b7-4e20-a808-57ab3e2cd6a7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.954251 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-horizon-secret-key\") pod \"horizon-8445bf989c-rxj4q\" (UID: \"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc\") " pod="openstack/horizon-8445bf989c-rxj4q" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.954296 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26nt7\" (UniqueName: \"kubernetes.io/projected/e936f170-22b7-4e20-a808-57ab3e2cd6a7-kube-api-access-26nt7\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.954358 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-logs\") pod \"horizon-8445bf989c-rxj4q\" (UID: \"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc\") " pod="openstack/horizon-8445bf989c-rxj4q" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.954376 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.954388 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e936f170-22b7-4e20-a808-57ab3e2cd6a7-logs\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.954404 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-scripts\") pod \"horizon-8445bf989c-rxj4q\" (UID: \"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc\") " pod="openstack/horizon-8445bf989c-rxj4q" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.955065 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e936f170-22b7-4e20-a808-57ab3e2cd6a7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.956227 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-config-data\") pod \"horizon-8445bf989c-rxj4q\" (UID: 
\"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc\") " pod="openstack/horizon-8445bf989c-rxj4q" Feb 01 15:06:56 crc kubenswrapper[4820]: E0201 15:06:56.963378 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceph combined-ca-bundle config-data glance httpd-run internal-tls-certs kube-api-access-f9n7w logs scripts], unattached volumes=[], failed to process volumes=[ceph combined-ca-bundle config-data glance httpd-run internal-tls-certs kube-api-access-f9n7w logs scripts]: context canceled" pod="openstack/glance-default-internal-api-0" podUID="6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.964039 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-logs\") pod \"horizon-8445bf989c-rxj4q\" (UID: \"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc\") " pod="openstack/horizon-8445bf989c-rxj4q" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.964306 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-scripts\") pod \"horizon-8445bf989c-rxj4q\" (UID: \"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc\") " pod="openstack/horizon-8445bf989c-rxj4q" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.964366 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.971282 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5897584f9c-s8gwr" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.977382 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e936f170-22b7-4e20-a808-57ab3e2cd6a7-scripts\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.977567 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e936f170-22b7-4e20-a808-57ab3e2cd6a7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.977938 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e936f170-22b7-4e20-a808-57ab3e2cd6a7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.994793 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26nt7\" (UniqueName: \"kubernetes.io/projected/e936f170-22b7-4e20-a808-57ab3e2cd6a7-kube-api-access-26nt7\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:56 crc kubenswrapper[4820]: I0201 15:06:56.997322 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-horizon-secret-key\") pod \"horizon-8445bf989c-rxj4q\" (UID: \"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc\") " pod="openstack/horizon-8445bf989c-rxj4q" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.000509 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e936f170-22b7-4e20-a808-57ab3e2cd6a7-ceph\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.015742 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e936f170-22b7-4e20-a808-57ab3e2cd6a7-config-data\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.022067 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " pod="openstack/glance-default-external-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.023493 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2lf9\" (UniqueName: \"kubernetes.io/projected/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-kube-api-access-f2lf9\") pod \"horizon-8445bf989c-rxj4q\" (UID: \"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc\") " pod="openstack/horizon-8445bf989c-rxj4q" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.059841 4820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.059931 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.059972 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.060026 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.060142 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-logs\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.060210 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.060252 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.060311 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9n7w\" (UniqueName: \"kubernetes.io/projected/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-kube-api-access-f9n7w\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.060345 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-ceph\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 
15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.062859 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8445bf989c-rxj4q" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.169156 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.169273 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9n7w\" (UniqueName: \"kubernetes.io/projected/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-kube-api-access-f9n7w\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.169308 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-ceph\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.169371 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.169392 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.169440 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.169486 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.169593 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-logs\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.169665 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.178497 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.179138 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.179416 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-ceph\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.179646 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.179688 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-logs\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.181236 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.192020 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.192531 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.222000 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9n7w\" (UniqueName: \"kubernetes.io/projected/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-kube-api-access-f9n7w\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 
15:06:57.238492 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.247556 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.293849 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.514505 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 01 15:06:57 crc kubenswrapper[4820]: W0201 15:06:57.521267 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7be6fa9_d20e_4b1c_82c4_4a13ddde938e.slice/crio-b1d9d5086c9487a4f085fb3429bbbd8b73e5ecf2962f8a9fc2cb0fd669af4161 WatchSource:0}: Error finding container b1d9d5086c9487a4f085fb3429bbbd8b73e5ecf2962f8a9fc2cb0fd669af4161: Status 404 returned error can't find the container with id b1d9d5086c9487a4f085fb3429bbbd8b73e5ecf2962f8a9fc2cb0fd669af4161 Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.523583 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-7b9z7"] Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.702418 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-3bac-account-create-update-g2ggq"] Feb 01 15:06:57 crc kubenswrapper[4820]: W0201 15:06:57.724589 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09f978a2_b4d7_4e2c_83f1_41778effd23c.slice/crio-e41bb1aab8bf9d58938898ec2c1f2c9f1e18991c546ae9f9b5144ff0c6b4ae96 WatchSource:0}: Error finding container e41bb1aab8bf9d58938898ec2c1f2c9f1e18991c546ae9f9b5144ff0c6b4ae96: Status 404 returned error can't find the container with id e41bb1aab8bf9d58938898ec2c1f2c9f1e18991c546ae9f9b5144ff0c6b4ae96 Feb 01 15:06:57 crc kubenswrapper[4820]: W0201 15:06:57.856311 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90ffe183_d2c6_4914_9c7b_faedde2e565a.slice/crio-639fc652b76e01ec30121dc6bfbd7f6db6a922d791a8ba00c600f520a8580e22 WatchSource:0}: Error finding container 639fc652b76e01ec30121dc6bfbd7f6db6a922d791a8ba00c600f520a8580e22: Status 404 returned error can't find the container with id 639fc652b76e01ec30121dc6bfbd7f6db6a922d791a8ba00c600f520a8580e22 Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.856471 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"8dabdc80-a117-4e53-9b1d-b8af575ba10f","Type":"ContainerStarted","Data":"fe305e4e3a21b556bd5f1cd3fbe1f0fd5b5c9e8aeaea3699c848883fa0795e21"} Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.857864 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"2efc1e63-fe88-414b-accb-7a48e72f12d6","Type":"ContainerStarted","Data":"b42385923a35e3509a9c1577aa22005eb4436fe84ae333f7a05f38213931cd60"} Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.858889 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-3bac-account-create-update-g2ggq" 
event={"ID":"09f978a2-b4d7-4e2c-83f1-41778effd23c","Type":"ContainerStarted","Data":"e41bb1aab8bf9d58938898ec2c1f2c9f1e18991c546ae9f9b5144ff0c6b4ae96"} Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.861307 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.861375 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-7b9z7" event={"ID":"c7be6fa9-d20e-4b1c-82c4-4a13ddde938e","Type":"ContainerStarted","Data":"53a286d16a6a50f100e976574934616e8dd87993d88d1ee5c1bf3d4365b9c67b"} Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.861420 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-7b9z7" event={"ID":"c7be6fa9-d20e-4b1c-82c4-4a13ddde938e","Type":"ContainerStarted","Data":"b1d9d5086c9487a4f085fb3429bbbd8b73e5ecf2962f8a9fc2cb0fd669af4161"} Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.861433 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5897584f9c-s8gwr"] Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.891024 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.897488 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-create-7b9z7" podStartSLOduration=1.897465084 podStartE2EDuration="1.897465084s" podCreationTimestamp="2026-02-01 15:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 15:06:57.887985587 +0000 UTC m=+2759.408351871" watchObservedRunningTime="2026-02-01 15:06:57.897465084 +0000 UTC m=+2759.417831368" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.939104 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8445bf989c-rxj4q"] Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.992163 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-combined-ca-bundle\") pod \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.992579 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-scripts\") pod \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.992642 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9n7w\" (UniqueName: \"kubernetes.io/projected/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-kube-api-access-f9n7w\") pod \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.992722 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-ceph\") pod \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.992746 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-internal-tls-certs\") pod \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.992812 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.992828 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-logs\") pod \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.992926 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-httpd-run\") pod \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.992967 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-config-data\") pod \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\" (UID: \"6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f\") " Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.995419 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f" (UID: "6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:06:57 crc kubenswrapper[4820]: I0201 15:06:57.996653 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-logs" (OuterVolumeSpecName: "logs") pod "6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f" (UID: "6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.006293 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-ceph" (OuterVolumeSpecName: "ceph") pod "6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f" (UID: "6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.006729 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-config-data" (OuterVolumeSpecName: "config-data") pod "6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f" (UID: "6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.006811 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-kube-api-access-f9n7w" (OuterVolumeSpecName: "kube-api-access-f9n7w") pod "6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f" (UID: "6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f"). InnerVolumeSpecName "kube-api-access-f9n7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.008763 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.009392 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f" (UID: "6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.009900 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f" (UID: "6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.013322 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-scripts" (OuterVolumeSpecName: "scripts") pod "6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f" (UID: "6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.013489 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f" (UID: "6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.096732 4820 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-ceph\") on node \"crc\" DevicePath \"\"" Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.096905 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.097105 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.097222 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-logs\") on node \"crc\" DevicePath \"\"" Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.101136 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.101322 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.101448 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.101547 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.101670 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9n7w\" (UniqueName: \"kubernetes.io/projected/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f-kube-api-access-f9n7w\") on node \"crc\" DevicePath \"\"" Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.120694 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.203925 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.883782 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e936f170-22b7-4e20-a808-57ab3e2cd6a7","Type":"ContainerStarted","Data":"722d4d41372f171b9c56dfa3b5ab75b5f305f37fc38c071c995c4cc10c0d42db"} Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.885922 4820 generic.go:334] "Generic (PLEG): container finished" podID="09f978a2-b4d7-4e2c-83f1-41778effd23c" containerID="d897d2b03ec08336250a7e9f2ac34f98a958cdcf4339a51c3f17e1923c004a95" exitCode=0 Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.886007 4820 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/manila-3bac-account-create-update-g2ggq" event={"ID":"09f978a2-b4d7-4e2c-83f1-41778effd23c","Type":"ContainerDied","Data":"d897d2b03ec08336250a7e9f2ac34f98a958cdcf4339a51c3f17e1923c004a95"} Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.896792 4820 generic.go:334] "Generic (PLEG): container finished" podID="c7be6fa9-d20e-4b1c-82c4-4a13ddde938e" containerID="53a286d16a6a50f100e976574934616e8dd87993d88d1ee5c1bf3d4365b9c67b" exitCode=0 Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.896838 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-7b9z7" event={"ID":"c7be6fa9-d20e-4b1c-82c4-4a13ddde938e","Type":"ContainerDied","Data":"53a286d16a6a50f100e976574934616e8dd87993d88d1ee5c1bf3d4365b9c67b"} Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.900672 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8445bf989c-rxj4q" event={"ID":"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc","Type":"ContainerStarted","Data":"a1cd13aacaee1cf1f1997f266cbacf860f8e83f4738c3d58ea3d08f7478a23f6"} Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.927908 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"8dabdc80-a117-4e53-9b1d-b8af575ba10f","Type":"ContainerStarted","Data":"441c797d0ce4ad17d9b833c5e178797841fceb91e67965111d71a311c55efd52"} Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.927949 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"8dabdc80-a117-4e53-9b1d-b8af575ba10f","Type":"ContainerStarted","Data":"3359c2fe73d0408e71588af579a91059ec0cb570c59a9dfb818296294a33a0db"} Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.935307 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5897584f9c-s8gwr" event={"ID":"90ffe183-d2c6-4914-9c7b-faedde2e565a","Type":"ContainerStarted","Data":"639fc652b76e01ec30121dc6bfbd7f6db6a922d791a8ba00c600f520a8580e22"} Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.938552 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 01 15:06:58 crc kubenswrapper[4820]: I0201 15:06:58.958391 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=3.120878973 podStartE2EDuration="3.958376649s" podCreationTimestamp="2026-02-01 15:06:55 +0000 UTC" firstStartedPulling="2026-02-01 15:06:57.261077161 +0000 UTC m=+2758.781443445" lastFinishedPulling="2026-02-01 15:06:58.098574837 +0000 UTC m=+2759.618941121" observedRunningTime="2026-02-01 15:06:58.954778222 +0000 UTC m=+2760.475144496" watchObservedRunningTime="2026-02-01 15:06:58.958376649 +0000 UTC m=+2760.478742923" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.012415 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.017372 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.025596 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.027123 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.041052 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.041234 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.053303 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.147524 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.147565 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25cdb207-36a8-4d05-aae6-e46de6f2dd09-scripts\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.147590 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25cdb207-36a8-4d05-aae6-e46de6f2dd09-config-data\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.147616 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25cdb207-36a8-4d05-aae6-e46de6f2dd09-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.147651 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25cdb207-36a8-4d05-aae6-e46de6f2dd09-logs\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.147677 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkccl\" (UniqueName: \"kubernetes.io/projected/25cdb207-36a8-4d05-aae6-e46de6f2dd09-kube-api-access-zkccl\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.147703 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25cdb207-36a8-4d05-aae6-e46de6f2dd09-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.147776 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25cdb207-36a8-4d05-aae6-e46de6f2dd09-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.147793 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/25cdb207-36a8-4d05-aae6-e46de6f2dd09-ceph\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.223692 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f" path="/var/lib/kubelet/pods/6f0676cb-a1fe-4000-a7ee-6d681d2d3c6f/volumes" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.249747 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25cdb207-36a8-4d05-aae6-e46de6f2dd09-logs\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.250274 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkccl\" (UniqueName: \"kubernetes.io/projected/25cdb207-36a8-4d05-aae6-e46de6f2dd09-kube-api-access-zkccl\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.250311 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25cdb207-36a8-4d05-aae6-e46de6f2dd09-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.250395 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25cdb207-36a8-4d05-aae6-e46de6f2dd09-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.250415 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/25cdb207-36a8-4d05-aae6-e46de6f2dd09-ceph\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.250486 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.250515 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25cdb207-36a8-4d05-aae6-e46de6f2dd09-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.250532 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25cdb207-36a8-4d05-aae6-e46de6f2dd09-config-data\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.250556 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25cdb207-36a8-4d05-aae6-e46de6f2dd09-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.250557 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25cdb207-36a8-4d05-aae6-e46de6f2dd09-logs\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.250930 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.251100 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25cdb207-36a8-4d05-aae6-e46de6f2dd09-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.254314 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.266779 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.274088 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25cdb207-36a8-4d05-aae6-e46de6f2dd09-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.281506 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25cdb207-36a8-4d05-aae6-e46de6f2dd09-scripts\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.284631 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25cdb207-36a8-4d05-aae6-e46de6f2dd09-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: 
I0201 15:06:59.284634 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkccl\" (UniqueName: \"kubernetes.io/projected/25cdb207-36a8-4d05-aae6-e46de6f2dd09-kube-api-access-zkccl\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.288368 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/25cdb207-36a8-4d05-aae6-e46de6f2dd09-ceph\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.301776 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25cdb207-36a8-4d05-aae6-e46de6f2dd09-config-data\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.303791 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.378636 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.562418 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5897584f9c-s8gwr"] Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.605354 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-79c9fd7b88-c7gn4"] Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.606982 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.613618 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.630508 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-79c9fd7b88-c7gn4"] Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.673511 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.727598 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8445bf989c-rxj4q"] Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.763613 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-69c64959b6-498kr"] Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.765607 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.772695 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.787594 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69c64959b6-498kr"] Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.790443 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdjlb\" (UniqueName: \"kubernetes.io/projected/9551e678-9809-43e8-8ea2-33c7b873f076-kube-api-access-fdjlb\") pod \"horizon-79c9fd7b88-c7gn4\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.790561 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9551e678-9809-43e8-8ea2-33c7b873f076-scripts\") pod \"horizon-79c9fd7b88-c7gn4\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.790597 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9551e678-9809-43e8-8ea2-33c7b873f076-logs\") pod \"horizon-79c9fd7b88-c7gn4\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.790639 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9551e678-9809-43e8-8ea2-33c7b873f076-horizon-tls-certs\") pod \"horizon-79c9fd7b88-c7gn4\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.790690 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9551e678-9809-43e8-8ea2-33c7b873f076-horizon-secret-key\") pod \"horizon-79c9fd7b88-c7gn4\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.790715 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9551e678-9809-43e8-8ea2-33c7b873f076-config-data\") pod \"horizon-79c9fd7b88-c7gn4\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.790773 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9551e678-9809-43e8-8ea2-33c7b873f076-combined-ca-bundle\") pod \"horizon-79c9fd7b88-c7gn4\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.891849 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9551e678-9809-43e8-8ea2-33c7b873f076-horizon-tls-certs\") pod \"horizon-79c9fd7b88-c7gn4\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " 
pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.891930 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18-horizon-tls-certs\") pod \"horizon-69c64959b6-498kr\" (UID: \"35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18\") " pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.891947 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18-combined-ca-bundle\") pod \"horizon-69c64959b6-498kr\" (UID: \"35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18\") " pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.891966 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9551e678-9809-43e8-8ea2-33c7b873f076-horizon-secret-key\") pod \"horizon-79c9fd7b88-c7gn4\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.891986 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9551e678-9809-43e8-8ea2-33c7b873f076-config-data\") pod \"horizon-79c9fd7b88-c7gn4\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.892013 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18-horizon-secret-key\") pod \"horizon-69c64959b6-498kr\" (UID: \"35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18\") " pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.892030 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18-scripts\") pod \"horizon-69c64959b6-498kr\" (UID: \"35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18\") " pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.892065 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9551e678-9809-43e8-8ea2-33c7b873f076-combined-ca-bundle\") pod \"horizon-79c9fd7b88-c7gn4\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.892103 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvtt7\" (UniqueName: \"kubernetes.io/projected/35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18-kube-api-access-wvtt7\") pod \"horizon-69c64959b6-498kr\" (UID: \"35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18\") " pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.892126 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdjlb\" (UniqueName: \"kubernetes.io/projected/9551e678-9809-43e8-8ea2-33c7b873f076-kube-api-access-fdjlb\") pod \"horizon-79c9fd7b88-c7gn4\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " 
pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.892144 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18-config-data\") pod \"horizon-69c64959b6-498kr\" (UID: \"35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18\") " pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.892190 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18-logs\") pod \"horizon-69c64959b6-498kr\" (UID: \"35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18\") " pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.892214 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9551e678-9809-43e8-8ea2-33c7b873f076-scripts\") pod \"horizon-79c9fd7b88-c7gn4\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.892237 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9551e678-9809-43e8-8ea2-33c7b873f076-logs\") pod \"horizon-79c9fd7b88-c7gn4\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.892564 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9551e678-9809-43e8-8ea2-33c7b873f076-logs\") pod \"horizon-79c9fd7b88-c7gn4\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.898309 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9551e678-9809-43e8-8ea2-33c7b873f076-horizon-secret-key\") pod \"horizon-79c9fd7b88-c7gn4\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.902584 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9551e678-9809-43e8-8ea2-33c7b873f076-config-data\") pod \"horizon-79c9fd7b88-c7gn4\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.909593 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9551e678-9809-43e8-8ea2-33c7b873f076-combined-ca-bundle\") pod \"horizon-79c9fd7b88-c7gn4\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.909883 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9551e678-9809-43e8-8ea2-33c7b873f076-horizon-tls-certs\") pod \"horizon-79c9fd7b88-c7gn4\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.910156 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/9551e678-9809-43e8-8ea2-33c7b873f076-scripts\") pod \"horizon-79c9fd7b88-c7gn4\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:06:59 crc kubenswrapper[4820]: I0201 15:06:59.980588 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdjlb\" (UniqueName: \"kubernetes.io/projected/9551e678-9809-43e8-8ea2-33c7b873f076-kube-api-access-fdjlb\") pod \"horizon-79c9fd7b88-c7gn4\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:07:00 crc kubenswrapper[4820]: I0201 15:06:59.997782 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"2efc1e63-fe88-414b-accb-7a48e72f12d6","Type":"ContainerStarted","Data":"f5ad0eac3ed4879ea598cdc736e2e688ee3fd97641ce05367eb1d013755e2c87"} Feb 01 15:07:00 crc kubenswrapper[4820]: I0201 15:06:59.997863 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"2efc1e63-fe88-414b-accb-7a48e72f12d6","Type":"ContainerStarted","Data":"712af1d7df157355f083ed5e3f8dca2fec4b8ea0bbeebbe2ea5bfadb98266e20"} Feb 01 15:07:00 crc kubenswrapper[4820]: I0201 15:07:00.000182 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvtt7\" (UniqueName: \"kubernetes.io/projected/35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18-kube-api-access-wvtt7\") pod \"horizon-69c64959b6-498kr\" (UID: \"35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18\") " pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:07:00 crc kubenswrapper[4820]: I0201 15:07:00.000233 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18-config-data\") pod \"horizon-69c64959b6-498kr\" (UID: \"35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18\") " pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:07:00 crc kubenswrapper[4820]: I0201 15:07:00.000297 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18-logs\") pod \"horizon-69c64959b6-498kr\" (UID: \"35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18\") " pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:07:00 crc kubenswrapper[4820]: I0201 15:07:00.000389 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18-horizon-tls-certs\") pod \"horizon-69c64959b6-498kr\" (UID: \"35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18\") " pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:07:00 crc kubenswrapper[4820]: I0201 15:07:00.000411 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18-combined-ca-bundle\") pod \"horizon-69c64959b6-498kr\" (UID: \"35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18\") " pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:07:00 crc kubenswrapper[4820]: I0201 15:07:00.000459 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18-scripts\") pod \"horizon-69c64959b6-498kr\" (UID: \"35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18\") " pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:07:00 crc kubenswrapper[4820]: I0201 15:07:00.000481 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18-horizon-secret-key\") pod \"horizon-69c64959b6-498kr\" (UID: \"35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18\") " pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:07:00 crc kubenswrapper[4820]: I0201 15:07:00.001740 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18-logs\") pod \"horizon-69c64959b6-498kr\" (UID: \"35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18\") " pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:07:00 crc kubenswrapper[4820]: I0201 15:07:00.003243 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18-config-data\") pod \"horizon-69c64959b6-498kr\" (UID: \"35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18\") " pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:07:00 crc kubenswrapper[4820]: I0201 15:07:00.003707 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18-scripts\") pod \"horizon-69c64959b6-498kr\" (UID: \"35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18\") " pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:07:00 crc kubenswrapper[4820]: I0201 15:07:00.006208 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18-combined-ca-bundle\") pod \"horizon-69c64959b6-498kr\" (UID: \"35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18\") " pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:07:00 crc kubenswrapper[4820]: I0201 15:07:00.021310 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18-horizon-tls-certs\") pod \"horizon-69c64959b6-498kr\" (UID: \"35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18\") " pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:07:00 crc kubenswrapper[4820]: I0201 15:07:00.023465 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18-horizon-secret-key\") pod \"horizon-69c64959b6-498kr\" (UID: \"35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18\") " pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:07:00 crc kubenswrapper[4820]: I0201 15:07:00.028953 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e936f170-22b7-4e20-a808-57ab3e2cd6a7","Type":"ContainerStarted","Data":"6eaa92f6e1d5093605dca65cae4c4459d077549b118e100f058331d0475d9bf6"} Feb 01 15:07:00 crc kubenswrapper[4820]: I0201 15:07:00.046441 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvtt7\" (UniqueName: \"kubernetes.io/projected/35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18-kube-api-access-wvtt7\") pod \"horizon-69c64959b6-498kr\" (UID: \"35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18\") " pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:07:00 crc kubenswrapper[4820]: I0201 15:07:00.136625 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:07:00 crc kubenswrapper[4820]: I0201 15:07:00.261288 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:07:00 crc kubenswrapper[4820]: I0201 15:07:00.362365 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=4.380134835 podStartE2EDuration="5.362348602s" podCreationTimestamp="2026-02-01 15:06:55 +0000 UTC" firstStartedPulling="2026-02-01 15:06:57.520331229 +0000 UTC m=+2759.040697513" lastFinishedPulling="2026-02-01 15:06:58.502544996 +0000 UTC m=+2760.022911280" observedRunningTime="2026-02-01 15:07:00.064378705 +0000 UTC m=+2761.584744989" watchObservedRunningTime="2026-02-01 15:07:00.362348602 +0000 UTC m=+2761.882714886" Feb 01 15:07:00 crc kubenswrapper[4820]: I0201 15:07:00.369046 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 01 15:07:00 crc kubenswrapper[4820]: W0201 15:07:00.447588 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25cdb207_36a8_4d05_aae6_e46de6f2dd09.slice/crio-964f593b09e597aac62f5b51cd12ddc2eb834ce038c9ae979b0972a4ca163b43 WatchSource:0}: Error finding container 964f593b09e597aac62f5b51cd12ddc2eb834ce038c9ae979b0972a4ca163b43: Status 404 returned error can't find the container with id 964f593b09e597aac62f5b51cd12ddc2eb834ce038c9ae979b0972a4ca163b43 Feb 01 15:07:00 crc kubenswrapper[4820]: I0201 15:07:00.865526 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-3bac-account-create-update-g2ggq" Feb 01 15:07:00 crc kubenswrapper[4820]: I0201 15:07:00.888490 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-7b9z7" Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.048749 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mm6f\" (UniqueName: \"kubernetes.io/projected/09f978a2-b4d7-4e2c-83f1-41778effd23c-kube-api-access-8mm6f\") pod \"09f978a2-b4d7-4e2c-83f1-41778effd23c\" (UID: \"09f978a2-b4d7-4e2c-83f1-41778effd23c\") " Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.048838 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2476\" (UniqueName: \"kubernetes.io/projected/c7be6fa9-d20e-4b1c-82c4-4a13ddde938e-kube-api-access-s2476\") pod \"c7be6fa9-d20e-4b1c-82c4-4a13ddde938e\" (UID: \"c7be6fa9-d20e-4b1c-82c4-4a13ddde938e\") " Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.049045 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7be6fa9-d20e-4b1c-82c4-4a13ddde938e-operator-scripts\") pod \"c7be6fa9-d20e-4b1c-82c4-4a13ddde938e\" (UID: \"c7be6fa9-d20e-4b1c-82c4-4a13ddde938e\") " Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.049153 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09f978a2-b4d7-4e2c-83f1-41778effd23c-operator-scripts\") pod \"09f978a2-b4d7-4e2c-83f1-41778effd23c\" (UID: \"09f978a2-b4d7-4e2c-83f1-41778effd23c\") " Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.050401 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09f978a2-b4d7-4e2c-83f1-41778effd23c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "09f978a2-b4d7-4e2c-83f1-41778effd23c" (UID: 
"09f978a2-b4d7-4e2c-83f1-41778effd23c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.052320 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7be6fa9-d20e-4b1c-82c4-4a13ddde938e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c7be6fa9-d20e-4b1c-82c4-4a13ddde938e" (UID: "c7be6fa9-d20e-4b1c-82c4-4a13ddde938e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.058090 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7be6fa9-d20e-4b1c-82c4-4a13ddde938e-kube-api-access-s2476" (OuterVolumeSpecName: "kube-api-access-s2476") pod "c7be6fa9-d20e-4b1c-82c4-4a13ddde938e" (UID: "c7be6fa9-d20e-4b1c-82c4-4a13ddde938e"). InnerVolumeSpecName "kube-api-access-s2476". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.064249 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09f978a2-b4d7-4e2c-83f1-41778effd23c-kube-api-access-8mm6f" (OuterVolumeSpecName: "kube-api-access-8mm6f") pod "09f978a2-b4d7-4e2c-83f1-41778effd23c" (UID: "09f978a2-b4d7-4e2c-83f1-41778effd23c"). InnerVolumeSpecName "kube-api-access-8mm6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.078579 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e936f170-22b7-4e20-a808-57ab3e2cd6a7","Type":"ContainerStarted","Data":"12f2e6b78ce323643759b0db6f4867d7c449c0d322c2416cb5a6741f1d1082d0"} Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.078752 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e936f170-22b7-4e20-a808-57ab3e2cd6a7" containerName="glance-log" containerID="cri-o://6eaa92f6e1d5093605dca65cae4c4459d077549b118e100f058331d0475d9bf6" gracePeriod=30 Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.079238 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e936f170-22b7-4e20-a808-57ab3e2cd6a7" containerName="glance-httpd" containerID="cri-o://12f2e6b78ce323643759b0db6f4867d7c449c0d322c2416cb5a6741f1d1082d0" gracePeriod=30 Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.127850 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"25cdb207-36a8-4d05-aae6-e46de6f2dd09","Type":"ContainerStarted","Data":"964f593b09e597aac62f5b51cd12ddc2eb834ce038c9ae979b0972a4ca163b43"} Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.128135 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.128092277 podStartE2EDuration="5.128092277s" podCreationTimestamp="2026-02-01 15:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 15:07:01.118170289 +0000 UTC m=+2762.638536573" watchObservedRunningTime="2026-02-01 15:07:01.128092277 +0000 UTC m=+2762.648458561" Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.135130 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-3bac-account-create-update-g2ggq" Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.135141 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-3bac-account-create-update-g2ggq" event={"ID":"09f978a2-b4d7-4e2c-83f1-41778effd23c","Type":"ContainerDied","Data":"e41bb1aab8bf9d58938898ec2c1f2c9f1e18991c546ae9f9b5144ff0c6b4ae96"} Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.135205 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e41bb1aab8bf9d58938898ec2c1f2c9f1e18991c546ae9f9b5144ff0c6b4ae96" Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.141057 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-7b9z7" Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.141797 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-7b9z7" event={"ID":"c7be6fa9-d20e-4b1c-82c4-4a13ddde938e","Type":"ContainerDied","Data":"b1d9d5086c9487a4f085fb3429bbbd8b73e5ecf2962f8a9fc2cb0fd669af4161"} Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.141847 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1d9d5086c9487a4f085fb3429bbbd8b73e5ecf2962f8a9fc2cb0fd669af4161" Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.173464 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09f978a2-b4d7-4e2c-83f1-41778effd23c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.173500 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mm6f\" (UniqueName: \"kubernetes.io/projected/09f978a2-b4d7-4e2c-83f1-41778effd23c-kube-api-access-8mm6f\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.173513 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2476\" (UniqueName: \"kubernetes.io/projected/c7be6fa9-d20e-4b1c-82c4-4a13ddde938e-kube-api-access-s2476\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.173523 4820 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7be6fa9-d20e-4b1c-82c4-4a13ddde938e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.345299 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.345347 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.403634 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69c64959b6-498kr"] Feb 01 15:07:01 crc kubenswrapper[4820]: I0201 15:07:01.429181 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-79c9fd7b88-c7gn4"] Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.046241 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.155154 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69c64959b6-498kr" event={"ID":"35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18","Type":"ContainerStarted","Data":"8f4a14a52694518183209b7437e0a4dc77972c0fab6e51fcedd2ef1372b5ce3f"} Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.159743 4820 generic.go:334] "Generic (PLEG): container finished" podID="e936f170-22b7-4e20-a808-57ab3e2cd6a7" containerID="12f2e6b78ce323643759b0db6f4867d7c449c0d322c2416cb5a6741f1d1082d0" exitCode=0 Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.159778 4820 generic.go:334] "Generic (PLEG): container finished" podID="e936f170-22b7-4e20-a808-57ab3e2cd6a7" containerID="6eaa92f6e1d5093605dca65cae4c4459d077549b118e100f058331d0475d9bf6" exitCode=143 Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.159817 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e936f170-22b7-4e20-a808-57ab3e2cd6a7","Type":"ContainerDied","Data":"12f2e6b78ce323643759b0db6f4867d7c449c0d322c2416cb5a6741f1d1082d0"} Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.159845 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e936f170-22b7-4e20-a808-57ab3e2cd6a7","Type":"ContainerDied","Data":"6eaa92f6e1d5093605dca65cae4c4459d077549b118e100f058331d0475d9bf6"} Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.159855 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e936f170-22b7-4e20-a808-57ab3e2cd6a7","Type":"ContainerDied","Data":"722d4d41372f171b9c56dfa3b5ab75b5f305f37fc38c071c995c4cc10c0d42db"} Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.159871 4820 scope.go:117] "RemoveContainer" containerID="12f2e6b78ce323643759b0db6f4867d7c449c0d322c2416cb5a6741f1d1082d0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.160033 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.165573 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"25cdb207-36a8-4d05-aae6-e46de6f2dd09","Type":"ContainerStarted","Data":"e36bfd305f03efc8f0be35557d8c6590d45715b061cbe6652f283cb673d93f51"} Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.175099 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79c9fd7b88-c7gn4" event={"ID":"9551e678-9809-43e8-8ea2-33c7b873f076","Type":"ContainerStarted","Data":"b40c40d443c2860f8d5336bc9137602f17d0538268b1e475f66c829e059bdf4d"} Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.209392 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e936f170-22b7-4e20-a808-57ab3e2cd6a7-public-tls-certs\") pod \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.209468 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e936f170-22b7-4e20-a808-57ab3e2cd6a7-combined-ca-bundle\") pod \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.209508 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.209602 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e936f170-22b7-4e20-a808-57ab3e2cd6a7-logs\") pod \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.209630 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e936f170-22b7-4e20-a808-57ab3e2cd6a7-scripts\") pod \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.209677 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26nt7\" (UniqueName: \"kubernetes.io/projected/e936f170-22b7-4e20-a808-57ab3e2cd6a7-kube-api-access-26nt7\") pod \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.209777 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e936f170-22b7-4e20-a808-57ab3e2cd6a7-httpd-run\") pod \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.209824 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e936f170-22b7-4e20-a808-57ab3e2cd6a7-config-data\") pod \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.209853 4820 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e936f170-22b7-4e20-a808-57ab3e2cd6a7-ceph\") pod \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\" (UID: \"e936f170-22b7-4e20-a808-57ab3e2cd6a7\") " Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.213603 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e936f170-22b7-4e20-a808-57ab3e2cd6a7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e936f170-22b7-4e20-a808-57ab3e2cd6a7" (UID: "e936f170-22b7-4e20-a808-57ab3e2cd6a7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.215219 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e936f170-22b7-4e20-a808-57ab3e2cd6a7-logs" (OuterVolumeSpecName: "logs") pod "e936f170-22b7-4e20-a808-57ab3e2cd6a7" (UID: "e936f170-22b7-4e20-a808-57ab3e2cd6a7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.218027 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e936f170-22b7-4e20-a808-57ab3e2cd6a7-scripts" (OuterVolumeSpecName: "scripts") pod "e936f170-22b7-4e20-a808-57ab3e2cd6a7" (UID: "e936f170-22b7-4e20-a808-57ab3e2cd6a7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.218811 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e936f170-22b7-4e20-a808-57ab3e2cd6a7-kube-api-access-26nt7" (OuterVolumeSpecName: "kube-api-access-26nt7") pod "e936f170-22b7-4e20-a808-57ab3e2cd6a7" (UID: "e936f170-22b7-4e20-a808-57ab3e2cd6a7"). InnerVolumeSpecName "kube-api-access-26nt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.220025 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "e936f170-22b7-4e20-a808-57ab3e2cd6a7" (UID: "e936f170-22b7-4e20-a808-57ab3e2cd6a7"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.232116 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e936f170-22b7-4e20-a808-57ab3e2cd6a7-ceph" (OuterVolumeSpecName: "ceph") pod "e936f170-22b7-4e20-a808-57ab3e2cd6a7" (UID: "e936f170-22b7-4e20-a808-57ab3e2cd6a7"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.251542 4820 scope.go:117] "RemoveContainer" containerID="6eaa92f6e1d5093605dca65cae4c4459d077549b118e100f058331d0475d9bf6" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.264135 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e936f170-22b7-4e20-a808-57ab3e2cd6a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e936f170-22b7-4e20-a808-57ab3e2cd6a7" (UID: "e936f170-22b7-4e20-a808-57ab3e2cd6a7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.280370 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e936f170-22b7-4e20-a808-57ab3e2cd6a7-config-data" (OuterVolumeSpecName: "config-data") pod "e936f170-22b7-4e20-a808-57ab3e2cd6a7" (UID: "e936f170-22b7-4e20-a808-57ab3e2cd6a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.312451 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e936f170-22b7-4e20-a808-57ab3e2cd6a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.312507 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.312519 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e936f170-22b7-4e20-a808-57ab3e2cd6a7-logs\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.312529 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e936f170-22b7-4e20-a808-57ab3e2cd6a7-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.312538 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26nt7\" (UniqueName: \"kubernetes.io/projected/e936f170-22b7-4e20-a808-57ab3e2cd6a7-kube-api-access-26nt7\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.312547 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e936f170-22b7-4e20-a808-57ab3e2cd6a7-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.312556 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e936f170-22b7-4e20-a808-57ab3e2cd6a7-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.312563 4820 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e936f170-22b7-4e20-a808-57ab3e2cd6a7-ceph\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.318776 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e936f170-22b7-4e20-a808-57ab3e2cd6a7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e936f170-22b7-4e20-a808-57ab3e2cd6a7" (UID: "e936f170-22b7-4e20-a808-57ab3e2cd6a7"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.335844 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.414120 4820 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e936f170-22b7-4e20-a808-57ab3e2cd6a7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.414151 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.419373 4820 scope.go:117] "RemoveContainer" containerID="12f2e6b78ce323643759b0db6f4867d7c449c0d322c2416cb5a6741f1d1082d0" Feb 01 15:07:02 crc kubenswrapper[4820]: E0201 15:07:02.420189 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12f2e6b78ce323643759b0db6f4867d7c449c0d322c2416cb5a6741f1d1082d0\": container with ID starting with 12f2e6b78ce323643759b0db6f4867d7c449c0d322c2416cb5a6741f1d1082d0 not found: ID does not exist" containerID="12f2e6b78ce323643759b0db6f4867d7c449c0d322c2416cb5a6741f1d1082d0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.420239 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12f2e6b78ce323643759b0db6f4867d7c449c0d322c2416cb5a6741f1d1082d0"} err="failed to get container status \"12f2e6b78ce323643759b0db6f4867d7c449c0d322c2416cb5a6741f1d1082d0\": rpc error: code = NotFound desc = could not find container \"12f2e6b78ce323643759b0db6f4867d7c449c0d322c2416cb5a6741f1d1082d0\": container with ID starting with 12f2e6b78ce323643759b0db6f4867d7c449c0d322c2416cb5a6741f1d1082d0 not found: ID does not exist" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.420270 4820 scope.go:117] "RemoveContainer" containerID="6eaa92f6e1d5093605dca65cae4c4459d077549b118e100f058331d0475d9bf6" Feb 01 15:07:02 crc kubenswrapper[4820]: E0201 15:07:02.420643 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6eaa92f6e1d5093605dca65cae4c4459d077549b118e100f058331d0475d9bf6\": container with ID starting with 6eaa92f6e1d5093605dca65cae4c4459d077549b118e100f058331d0475d9bf6 not found: ID does not exist" containerID="6eaa92f6e1d5093605dca65cae4c4459d077549b118e100f058331d0475d9bf6" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.420706 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6eaa92f6e1d5093605dca65cae4c4459d077549b118e100f058331d0475d9bf6"} err="failed to get container status \"6eaa92f6e1d5093605dca65cae4c4459d077549b118e100f058331d0475d9bf6\": rpc error: code = NotFound desc = could not find container \"6eaa92f6e1d5093605dca65cae4c4459d077549b118e100f058331d0475d9bf6\": container with ID starting with 6eaa92f6e1d5093605dca65cae4c4459d077549b118e100f058331d0475d9bf6 not found: ID does not exist" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.420753 4820 scope.go:117] "RemoveContainer" containerID="12f2e6b78ce323643759b0db6f4867d7c449c0d322c2416cb5a6741f1d1082d0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.422153 4820 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12f2e6b78ce323643759b0db6f4867d7c449c0d322c2416cb5a6741f1d1082d0"} err="failed to get container status \"12f2e6b78ce323643759b0db6f4867d7c449c0d322c2416cb5a6741f1d1082d0\": rpc error: code = NotFound desc = could not find container \"12f2e6b78ce323643759b0db6f4867d7c449c0d322c2416cb5a6741f1d1082d0\": container with ID starting with 12f2e6b78ce323643759b0db6f4867d7c449c0d322c2416cb5a6741f1d1082d0 not found: ID does not exist" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.422185 4820 scope.go:117] "RemoveContainer" containerID="6eaa92f6e1d5093605dca65cae4c4459d077549b118e100f058331d0475d9bf6" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.424648 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6eaa92f6e1d5093605dca65cae4c4459d077549b118e100f058331d0475d9bf6"} err="failed to get container status \"6eaa92f6e1d5093605dca65cae4c4459d077549b118e100f058331d0475d9bf6\": rpc error: code = NotFound desc = could not find container \"6eaa92f6e1d5093605dca65cae4c4459d077549b118e100f058331d0475d9bf6\": container with ID starting with 6eaa92f6e1d5093605dca65cae4c4459d077549b118e100f058331d0475d9bf6 not found: ID does not exist" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.500306 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.514305 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.534277 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 01 15:07:02 crc kubenswrapper[4820]: E0201 15:07:02.534810 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f978a2-b4d7-4e2c-83f1-41778effd23c" containerName="mariadb-account-create-update" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.534826 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f978a2-b4d7-4e2c-83f1-41778effd23c" containerName="mariadb-account-create-update" Feb 01 15:07:02 crc kubenswrapper[4820]: E0201 15:07:02.534859 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e936f170-22b7-4e20-a808-57ab3e2cd6a7" containerName="glance-log" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.534867 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e936f170-22b7-4e20-a808-57ab3e2cd6a7" containerName="glance-log" Feb 01 15:07:02 crc kubenswrapper[4820]: E0201 15:07:02.534909 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e936f170-22b7-4e20-a808-57ab3e2cd6a7" containerName="glance-httpd" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.534917 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e936f170-22b7-4e20-a808-57ab3e2cd6a7" containerName="glance-httpd" Feb 01 15:07:02 crc kubenswrapper[4820]: E0201 15:07:02.534929 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7be6fa9-d20e-4b1c-82c4-4a13ddde938e" containerName="mariadb-database-create" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.534937 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7be6fa9-d20e-4b1c-82c4-4a13ddde938e" containerName="mariadb-database-create" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.535171 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7be6fa9-d20e-4b1c-82c4-4a13ddde938e" 
containerName="mariadb-database-create" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.535193 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="09f978a2-b4d7-4e2c-83f1-41778effd23c" containerName="mariadb-account-create-update" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.535208 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e936f170-22b7-4e20-a808-57ab3e2cd6a7" containerName="glance-log" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.535227 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e936f170-22b7-4e20-a808-57ab3e2cd6a7" containerName="glance-httpd" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.536798 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.540024 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.540270 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.571908 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.730556 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.730741 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/939be59c-3ae2-4a0e-ade1-c491cb03289e-config-data\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.730785 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/939be59c-3ae2-4a0e-ade1-c491cb03289e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.730847 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/939be59c-3ae2-4a0e-ade1-c491cb03289e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.731063 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/939be59c-3ae2-4a0e-ade1-c491cb03289e-scripts\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.731239 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/939be59c-3ae2-4a0e-ade1-c491cb03289e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.731536 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/939be59c-3ae2-4a0e-ade1-c491cb03289e-ceph\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.731635 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/939be59c-3ae2-4a0e-ade1-c491cb03289e-logs\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.731896 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4ktq\" (UniqueName: \"kubernetes.io/projected/939be59c-3ae2-4a0e-ade1-c491cb03289e-kube-api-access-l4ktq\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.834359 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/939be59c-3ae2-4a0e-ade1-c491cb03289e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.834424 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/939be59c-3ae2-4a0e-ade1-c491cb03289e-scripts\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.834462 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/939be59c-3ae2-4a0e-ade1-c491cb03289e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.834514 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/939be59c-3ae2-4a0e-ade1-c491cb03289e-ceph\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.834545 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/939be59c-3ae2-4a0e-ade1-c491cb03289e-logs\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.834615 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4ktq\" (UniqueName: 
\"kubernetes.io/projected/939be59c-3ae2-4a0e-ade1-c491cb03289e-kube-api-access-l4ktq\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.834650 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.834696 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/939be59c-3ae2-4a0e-ade1-c491cb03289e-config-data\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.834729 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/939be59c-3ae2-4a0e-ade1-c491cb03289e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.835281 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/939be59c-3ae2-4a0e-ade1-c491cb03289e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.835367 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.835369 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/939be59c-3ae2-4a0e-ade1-c491cb03289e-logs\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.839468 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/939be59c-3ae2-4a0e-ade1-c491cb03289e-ceph\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.839693 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/939be59c-3ae2-4a0e-ade1-c491cb03289e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.842760 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/939be59c-3ae2-4a0e-ade1-c491cb03289e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.844027 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/939be59c-3ae2-4a0e-ade1-c491cb03289e-config-data\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.850391 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/939be59c-3ae2-4a0e-ade1-c491cb03289e-scripts\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.856232 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4ktq\" (UniqueName: \"kubernetes.io/projected/939be59c-3ae2-4a0e-ade1-c491cb03289e-kube-api-access-l4ktq\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.876047 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"939be59c-3ae2-4a0e-ade1-c491cb03289e\") " pod="openstack/glance-default-external-api-0" Feb 01 15:07:02 crc kubenswrapper[4820]: I0201 15:07:02.888597 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 01 15:07:03 crc kubenswrapper[4820]: I0201 15:07:03.221040 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="25cdb207-36a8-4d05-aae6-e46de6f2dd09" containerName="glance-log" containerID="cri-o://e36bfd305f03efc8f0be35557d8c6590d45715b061cbe6652f283cb673d93f51" gracePeriod=30 Feb 01 15:07:03 crc kubenswrapper[4820]: I0201 15:07:03.222040 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="25cdb207-36a8-4d05-aae6-e46de6f2dd09" containerName="glance-httpd" containerID="cri-o://8b6f4e9fdb6140a78ef0de1b16129560fac90aff290ea9d447fed7cfb2d5c9b1" gracePeriod=30 Feb 01 15:07:03 crc kubenswrapper[4820]: I0201 15:07:03.223251 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e936f170-22b7-4e20-a808-57ab3e2cd6a7" path="/var/lib/kubelet/pods/e936f170-22b7-4e20-a808-57ab3e2cd6a7/volumes" Feb 01 15:07:03 crc kubenswrapper[4820]: I0201 15:07:03.224413 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"25cdb207-36a8-4d05-aae6-e46de6f2dd09","Type":"ContainerStarted","Data":"8b6f4e9fdb6140a78ef0de1b16129560fac90aff290ea9d447fed7cfb2d5c9b1"} Feb 01 15:07:03 crc kubenswrapper[4820]: I0201 15:07:03.253625 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.253593775 podStartE2EDuration="5.253593775s" podCreationTimestamp="2026-02-01 15:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 15:07:03.240743577 +0000 UTC m=+2764.761109861" 
watchObservedRunningTime="2026-02-01 15:07:03.253593775 +0000 UTC m=+2764.773960059" Feb 01 15:07:03 crc kubenswrapper[4820]: E0201 15:07:03.504711 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25cdb207_36a8_4d05_aae6_e46de6f2dd09.slice/crio-conmon-e36bfd305f03efc8f0be35557d8c6590d45715b061cbe6652f283cb673d93f51.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25cdb207_36a8_4d05_aae6_e46de6f2dd09.slice/crio-e36bfd305f03efc8f0be35557d8c6590d45715b061cbe6652f283cb673d93f51.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25cdb207_36a8_4d05_aae6_e46de6f2dd09.slice/crio-8b6f4e9fdb6140a78ef0de1b16129560fac90aff290ea9d447fed7cfb2d5c9b1.scope\": RecentStats: unable to find data in memory cache]" Feb 01 15:07:03 crc kubenswrapper[4820]: I0201 15:07:03.596719 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 01 15:07:03 crc kubenswrapper[4820]: I0201 15:07:03.984434 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.071540 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25cdb207-36a8-4d05-aae6-e46de6f2dd09-internal-tls-certs\") pod \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.071596 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25cdb207-36a8-4d05-aae6-e46de6f2dd09-config-data\") pod \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.071652 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25cdb207-36a8-4d05-aae6-e46de6f2dd09-combined-ca-bundle\") pod \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.071676 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkccl\" (UniqueName: \"kubernetes.io/projected/25cdb207-36a8-4d05-aae6-e46de6f2dd09-kube-api-access-zkccl\") pod \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.071734 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25cdb207-36a8-4d05-aae6-e46de6f2dd09-httpd-run\") pod \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.071768 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25cdb207-36a8-4d05-aae6-e46de6f2dd09-scripts\") pod \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.071857 4820 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.071970 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25cdb207-36a8-4d05-aae6-e46de6f2dd09-logs\") pod \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.071990 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/25cdb207-36a8-4d05-aae6-e46de6f2dd09-ceph\") pod \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\" (UID: \"25cdb207-36a8-4d05-aae6-e46de6f2dd09\") " Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.082290 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25cdb207-36a8-4d05-aae6-e46de6f2dd09-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "25cdb207-36a8-4d05-aae6-e46de6f2dd09" (UID: "25cdb207-36a8-4d05-aae6-e46de6f2dd09"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.085154 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25cdb207-36a8-4d05-aae6-e46de6f2dd09-logs" (OuterVolumeSpecName: "logs") pod "25cdb207-36a8-4d05-aae6-e46de6f2dd09" (UID: "25cdb207-36a8-4d05-aae6-e46de6f2dd09"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.098336 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "25cdb207-36a8-4d05-aae6-e46de6f2dd09" (UID: "25cdb207-36a8-4d05-aae6-e46de6f2dd09"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.098768 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25cdb207-36a8-4d05-aae6-e46de6f2dd09-kube-api-access-zkccl" (OuterVolumeSpecName: "kube-api-access-zkccl") pod "25cdb207-36a8-4d05-aae6-e46de6f2dd09" (UID: "25cdb207-36a8-4d05-aae6-e46de6f2dd09"). InnerVolumeSpecName "kube-api-access-zkccl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.131247 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25cdb207-36a8-4d05-aae6-e46de6f2dd09-ceph" (OuterVolumeSpecName: "ceph") pod "25cdb207-36a8-4d05-aae6-e46de6f2dd09" (UID: "25cdb207-36a8-4d05-aae6-e46de6f2dd09"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.131444 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25cdb207-36a8-4d05-aae6-e46de6f2dd09-scripts" (OuterVolumeSpecName: "scripts") pod "25cdb207-36a8-4d05-aae6-e46de6f2dd09" (UID: "25cdb207-36a8-4d05-aae6-e46de6f2dd09"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.176677 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkccl\" (UniqueName: \"kubernetes.io/projected/25cdb207-36a8-4d05-aae6-e46de6f2dd09-kube-api-access-zkccl\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.176712 4820 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25cdb207-36a8-4d05-aae6-e46de6f2dd09-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.176722 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25cdb207-36a8-4d05-aae6-e46de6f2dd09-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.176743 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.176753 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25cdb207-36a8-4d05-aae6-e46de6f2dd09-logs\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.176763 4820 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/25cdb207-36a8-4d05-aae6-e46de6f2dd09-ceph\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.243251 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25cdb207-36a8-4d05-aae6-e46de6f2dd09-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25cdb207-36a8-4d05-aae6-e46de6f2dd09" (UID: "25cdb207-36a8-4d05-aae6-e46de6f2dd09"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.254535 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.279604 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25cdb207-36a8-4d05-aae6-e46de6f2dd09-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.279631 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.280036 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25cdb207-36a8-4d05-aae6-e46de6f2dd09-config-data" (OuterVolumeSpecName: "config-data") pod "25cdb207-36a8-4d05-aae6-e46de6f2dd09" (UID: "25cdb207-36a8-4d05-aae6-e46de6f2dd09"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.304149 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"939be59c-3ae2-4a0e-ade1-c491cb03289e","Type":"ContainerStarted","Data":"d4d048ef65ff62cfc483eea083d941183afbb6110676397ec4421a705249b2cd"} Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.332478 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25cdb207-36a8-4d05-aae6-e46de6f2dd09-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "25cdb207-36a8-4d05-aae6-e46de6f2dd09" (UID: "25cdb207-36a8-4d05-aae6-e46de6f2dd09"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.334838 4820 generic.go:334] "Generic (PLEG): container finished" podID="25cdb207-36a8-4d05-aae6-e46de6f2dd09" containerID="8b6f4e9fdb6140a78ef0de1b16129560fac90aff290ea9d447fed7cfb2d5c9b1" exitCode=0 Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.334888 4820 generic.go:334] "Generic (PLEG): container finished" podID="25cdb207-36a8-4d05-aae6-e46de6f2dd09" containerID="e36bfd305f03efc8f0be35557d8c6590d45715b061cbe6652f283cb673d93f51" exitCode=143 Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.334911 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"25cdb207-36a8-4d05-aae6-e46de6f2dd09","Type":"ContainerDied","Data":"8b6f4e9fdb6140a78ef0de1b16129560fac90aff290ea9d447fed7cfb2d5c9b1"} Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.334943 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"25cdb207-36a8-4d05-aae6-e46de6f2dd09","Type":"ContainerDied","Data":"e36bfd305f03efc8f0be35557d8c6590d45715b061cbe6652f283cb673d93f51"} Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.334953 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"25cdb207-36a8-4d05-aae6-e46de6f2dd09","Type":"ContainerDied","Data":"964f593b09e597aac62f5b51cd12ddc2eb834ce038c9ae979b0972a4ca163b43"} Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.334968 4820 scope.go:117] "RemoveContainer" containerID="8b6f4e9fdb6140a78ef0de1b16129560fac90aff290ea9d447fed7cfb2d5c9b1" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.335100 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.381029 4820 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25cdb207-36a8-4d05-aae6-e46de6f2dd09-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.381062 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25cdb207-36a8-4d05-aae6-e46de6f2dd09-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.414055 4820 scope.go:117] "RemoveContainer" containerID="e36bfd305f03efc8f0be35557d8c6590d45715b061cbe6652f283cb673d93f51" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.419317 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.427616 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.437124 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 01 15:07:04 crc kubenswrapper[4820]: E0201 15:07:04.437542 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25cdb207-36a8-4d05-aae6-e46de6f2dd09" containerName="glance-httpd" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.437563 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="25cdb207-36a8-4d05-aae6-e46de6f2dd09" containerName="glance-httpd" Feb 01 15:07:04 crc kubenswrapper[4820]: E0201 15:07:04.437590 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25cdb207-36a8-4d05-aae6-e46de6f2dd09" containerName="glance-log" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.437597 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="25cdb207-36a8-4d05-aae6-e46de6f2dd09" containerName="glance-log" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.437775 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="25cdb207-36a8-4d05-aae6-e46de6f2dd09" containerName="glance-httpd" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.437807 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="25cdb207-36a8-4d05-aae6-e46de6f2dd09" containerName="glance-log" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.439545 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.444060 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.445506 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.450421 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.490270 4820 scope.go:117] "RemoveContainer" containerID="8b6f4e9fdb6140a78ef0de1b16129560fac90aff290ea9d447fed7cfb2d5c9b1" Feb 01 15:07:04 crc kubenswrapper[4820]: E0201 15:07:04.490797 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b6f4e9fdb6140a78ef0de1b16129560fac90aff290ea9d447fed7cfb2d5c9b1\": container with ID starting with 8b6f4e9fdb6140a78ef0de1b16129560fac90aff290ea9d447fed7cfb2d5c9b1 not found: ID does not exist" containerID="8b6f4e9fdb6140a78ef0de1b16129560fac90aff290ea9d447fed7cfb2d5c9b1" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.490841 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b6f4e9fdb6140a78ef0de1b16129560fac90aff290ea9d447fed7cfb2d5c9b1"} err="failed to get container status \"8b6f4e9fdb6140a78ef0de1b16129560fac90aff290ea9d447fed7cfb2d5c9b1\": rpc error: code = NotFound desc = could not find container \"8b6f4e9fdb6140a78ef0de1b16129560fac90aff290ea9d447fed7cfb2d5c9b1\": container with ID starting with 8b6f4e9fdb6140a78ef0de1b16129560fac90aff290ea9d447fed7cfb2d5c9b1 not found: ID does not exist" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.490895 4820 scope.go:117] "RemoveContainer" containerID="e36bfd305f03efc8f0be35557d8c6590d45715b061cbe6652f283cb673d93f51" Feb 01 15:07:04 crc kubenswrapper[4820]: E0201 15:07:04.491393 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e36bfd305f03efc8f0be35557d8c6590d45715b061cbe6652f283cb673d93f51\": container with ID starting with e36bfd305f03efc8f0be35557d8c6590d45715b061cbe6652f283cb673d93f51 not found: ID does not exist" containerID="e36bfd305f03efc8f0be35557d8c6590d45715b061cbe6652f283cb673d93f51" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.491424 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e36bfd305f03efc8f0be35557d8c6590d45715b061cbe6652f283cb673d93f51"} err="failed to get container status \"e36bfd305f03efc8f0be35557d8c6590d45715b061cbe6652f283cb673d93f51\": rpc error: code = NotFound desc = could not find container \"e36bfd305f03efc8f0be35557d8c6590d45715b061cbe6652f283cb673d93f51\": container with ID starting with e36bfd305f03efc8f0be35557d8c6590d45715b061cbe6652f283cb673d93f51 not found: ID does not exist" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.491443 4820 scope.go:117] "RemoveContainer" containerID="8b6f4e9fdb6140a78ef0de1b16129560fac90aff290ea9d447fed7cfb2d5c9b1" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.491864 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b6f4e9fdb6140a78ef0de1b16129560fac90aff290ea9d447fed7cfb2d5c9b1"} err="failed to get container status 
\"8b6f4e9fdb6140a78ef0de1b16129560fac90aff290ea9d447fed7cfb2d5c9b1\": rpc error: code = NotFound desc = could not find container \"8b6f4e9fdb6140a78ef0de1b16129560fac90aff290ea9d447fed7cfb2d5c9b1\": container with ID starting with 8b6f4e9fdb6140a78ef0de1b16129560fac90aff290ea9d447fed7cfb2d5c9b1 not found: ID does not exist" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.491920 4820 scope.go:117] "RemoveContainer" containerID="e36bfd305f03efc8f0be35557d8c6590d45715b061cbe6652f283cb673d93f51" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.492382 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e36bfd305f03efc8f0be35557d8c6590d45715b061cbe6652f283cb673d93f51"} err="failed to get container status \"e36bfd305f03efc8f0be35557d8c6590d45715b061cbe6652f283cb673d93f51\": rpc error: code = NotFound desc = could not find container \"e36bfd305f03efc8f0be35557d8c6590d45715b061cbe6652f283cb673d93f51\": container with ID starting with e36bfd305f03efc8f0be35557d8c6590d45715b061cbe6652f283cb673d93f51 not found: ID does not exist" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.585674 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzs8h\" (UniqueName: \"kubernetes.io/projected/c528ece8-d372-4479-b08e-cd5e12306def-kube-api-access-lzs8h\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.585947 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c528ece8-d372-4479-b08e-cd5e12306def-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.586177 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c528ece8-d372-4479-b08e-cd5e12306def-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.586312 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c528ece8-d372-4479-b08e-cd5e12306def-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.586452 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c528ece8-d372-4479-b08e-cd5e12306def-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.586585 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc 
kubenswrapper[4820]: I0201 15:07:04.586679 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c528ece8-d372-4479-b08e-cd5e12306def-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.586760 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c528ece8-d372-4479-b08e-cd5e12306def-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.586849 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c528ece8-d372-4479-b08e-cd5e12306def-logs\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.691769 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c528ece8-d372-4479-b08e-cd5e12306def-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.692307 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c528ece8-d372-4479-b08e-cd5e12306def-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.692363 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c528ece8-d372-4479-b08e-cd5e12306def-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.692412 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.692447 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c528ece8-d372-4479-b08e-cd5e12306def-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.692472 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c528ece8-d372-4479-b08e-cd5e12306def-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.692492 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c528ece8-d372-4479-b08e-cd5e12306def-logs\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.692533 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzs8h\" (UniqueName: \"kubernetes.io/projected/c528ece8-d372-4479-b08e-cd5e12306def-kube-api-access-lzs8h\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.692557 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c528ece8-d372-4479-b08e-cd5e12306def-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.692811 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c528ece8-d372-4479-b08e-cd5e12306def-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.695236 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.697263 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c528ece8-d372-4479-b08e-cd5e12306def-logs\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.698053 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c528ece8-d372-4479-b08e-cd5e12306def-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.704779 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c528ece8-d372-4479-b08e-cd5e12306def-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.704980 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c528ece8-d372-4479-b08e-cd5e12306def-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.705389 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c528ece8-d372-4479-b08e-cd5e12306def-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.705507 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c528ece8-d372-4479-b08e-cd5e12306def-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.734082 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.735175 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzs8h\" (UniqueName: \"kubernetes.io/projected/c528ece8-d372-4479-b08e-cd5e12306def-kube-api-access-lzs8h\") pod \"glance-default-internal-api-0\" (UID: \"c528ece8-d372-4479-b08e-cd5e12306def\") " pod="openstack/glance-default-internal-api-0" Feb 01 15:07:04 crc kubenswrapper[4820]: I0201 15:07:04.777033 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 01 15:07:05 crc kubenswrapper[4820]: I0201 15:07:05.223587 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25cdb207-36a8-4d05-aae6-e46de6f2dd09" path="/var/lib/kubelet/pods/25cdb207-36a8-4d05-aae6-e46de6f2dd09/volumes" Feb 01 15:07:05 crc kubenswrapper[4820]: I0201 15:07:05.380313 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"939be59c-3ae2-4a0e-ade1-c491cb03289e","Type":"ContainerStarted","Data":"1aff75821effa8f08e5b7662e5707f5a926c8cb867862cfd07a666f82ed9fee4"} Feb 01 15:07:05 crc kubenswrapper[4820]: I0201 15:07:05.419061 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 01 15:07:06 crc kubenswrapper[4820]: I0201 15:07:06.403981 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"939be59c-3ae2-4a0e-ade1-c491cb03289e","Type":"ContainerStarted","Data":"ff1a0ecb1d21e75383a0b731f3ede379aa9501bba9ce2a99111b743caea92b3b"} Feb 01 15:07:06 crc kubenswrapper[4820]: I0201 15:07:06.443171 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.443135692 podStartE2EDuration="4.443135692s" podCreationTimestamp="2026-02-01 15:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 15:07:06.43016478 +0000 UTC m=+2767.950531074" watchObservedRunningTime="2026-02-01 15:07:06.443135692 +0000 UTC m=+2767.963501976" Feb 01 15:07:06 crc kubenswrapper[4820]: I0201 15:07:06.476243 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Feb 01 15:07:06 crc kubenswrapper[4820]: I0201 15:07:06.525211 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Feb 01 15:07:06 crc kubenswrapper[4820]: I0201 15:07:06.924010 4820 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-ptbpt"] Feb 01 15:07:06 crc kubenswrapper[4820]: I0201 15:07:06.925954 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-ptbpt" Feb 01 15:07:06 crc kubenswrapper[4820]: I0201 15:07:06.928428 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-qmpj7" Feb 01 15:07:06 crc kubenswrapper[4820]: I0201 15:07:06.935031 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Feb 01 15:07:06 crc kubenswrapper[4820]: I0201 15:07:06.936142 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-ptbpt"] Feb 01 15:07:07 crc kubenswrapper[4820]: I0201 15:07:07.059145 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss2hb\" (UniqueName: \"kubernetes.io/projected/d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052-kube-api-access-ss2hb\") pod \"manila-db-sync-ptbpt\" (UID: \"d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052\") " pod="openstack/manila-db-sync-ptbpt" Feb 01 15:07:07 crc kubenswrapper[4820]: I0201 15:07:07.059269 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052-job-config-data\") pod \"manila-db-sync-ptbpt\" (UID: \"d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052\") " pod="openstack/manila-db-sync-ptbpt" Feb 01 15:07:07 crc kubenswrapper[4820]: I0201 15:07:07.059546 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052-combined-ca-bundle\") pod \"manila-db-sync-ptbpt\" (UID: \"d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052\") " pod="openstack/manila-db-sync-ptbpt" Feb 01 15:07:07 crc kubenswrapper[4820]: I0201 15:07:07.059725 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052-config-data\") pod \"manila-db-sync-ptbpt\" (UID: \"d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052\") " pod="openstack/manila-db-sync-ptbpt" Feb 01 15:07:07 crc kubenswrapper[4820]: I0201 15:07:07.162583 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052-combined-ca-bundle\") pod \"manila-db-sync-ptbpt\" (UID: \"d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052\") " pod="openstack/manila-db-sync-ptbpt" Feb 01 15:07:07 crc kubenswrapper[4820]: I0201 15:07:07.163162 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052-config-data\") pod \"manila-db-sync-ptbpt\" (UID: \"d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052\") " pod="openstack/manila-db-sync-ptbpt" Feb 01 15:07:07 crc kubenswrapper[4820]: I0201 15:07:07.163215 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss2hb\" (UniqueName: \"kubernetes.io/projected/d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052-kube-api-access-ss2hb\") pod \"manila-db-sync-ptbpt\" (UID: \"d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052\") " pod="openstack/manila-db-sync-ptbpt" Feb 01 15:07:07 crc kubenswrapper[4820]: I0201 15:07:07.163297 4820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052-job-config-data\") pod \"manila-db-sync-ptbpt\" (UID: \"d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052\") " pod="openstack/manila-db-sync-ptbpt" Feb 01 15:07:07 crc kubenswrapper[4820]: I0201 15:07:07.171317 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052-job-config-data\") pod \"manila-db-sync-ptbpt\" (UID: \"d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052\") " pod="openstack/manila-db-sync-ptbpt" Feb 01 15:07:07 crc kubenswrapper[4820]: I0201 15:07:07.171478 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052-config-data\") pod \"manila-db-sync-ptbpt\" (UID: \"d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052\") " pod="openstack/manila-db-sync-ptbpt" Feb 01 15:07:07 crc kubenswrapper[4820]: I0201 15:07:07.179854 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052-combined-ca-bundle\") pod \"manila-db-sync-ptbpt\" (UID: \"d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052\") " pod="openstack/manila-db-sync-ptbpt" Feb 01 15:07:07 crc kubenswrapper[4820]: I0201 15:07:07.180188 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss2hb\" (UniqueName: \"kubernetes.io/projected/d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052-kube-api-access-ss2hb\") pod \"manila-db-sync-ptbpt\" (UID: \"d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052\") " pod="openstack/manila-db-sync-ptbpt" Feb 01 15:07:07 crc kubenswrapper[4820]: I0201 15:07:07.277592 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-ptbpt" Feb 01 15:07:11 crc kubenswrapper[4820]: W0201 15:07:11.093435 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc528ece8_d372_4479_b08e_cd5e12306def.slice/crio-77fce9a28089c3fed49dd85e65a6f12c11a905197880eb5ccf80dbc0a477cd00 WatchSource:0}: Error finding container 77fce9a28089c3fed49dd85e65a6f12c11a905197880eb5ccf80dbc0a477cd00: Status 404 returned error can't find the container with id 77fce9a28089c3fed49dd85e65a6f12c11a905197880eb5ccf80dbc0a477cd00 Feb 01 15:07:11 crc kubenswrapper[4820]: I0201 15:07:11.466007 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c528ece8-d372-4479-b08e-cd5e12306def","Type":"ContainerStarted","Data":"77fce9a28089c3fed49dd85e65a6f12c11a905197880eb5ccf80dbc0a477cd00"} Feb 01 15:07:11 crc kubenswrapper[4820]: I0201 15:07:11.789808 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-ptbpt"] Feb 01 15:07:12 crc kubenswrapper[4820]: I0201 15:07:12.482553 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8445bf989c-rxj4q" event={"ID":"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc","Type":"ContainerStarted","Data":"0ce877ee0ddc0d61c8039e2afa0d5a58810cc73c5c566e98cfc613e1a4694547"} Feb 01 15:07:12 crc kubenswrapper[4820]: I0201 15:07:12.483543 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8445bf989c-rxj4q" event={"ID":"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc","Type":"ContainerStarted","Data":"39cbfbab8bd3e2d83341a15e9de387fd8f9642aa82775d9d72d4481b3ca5174a"} Feb 01 15:07:12 crc kubenswrapper[4820]: I0201 15:07:12.483105 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8445bf989c-rxj4q" podUID="cdfcaeab-5a54-4f15-9e69-f5712b23d6fc" containerName="horizon" containerID="cri-o://0ce877ee0ddc0d61c8039e2afa0d5a58810cc73c5c566e98cfc613e1a4694547" gracePeriod=30 Feb 01 15:07:12 crc kubenswrapper[4820]: I0201 15:07:12.482717 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8445bf989c-rxj4q" podUID="cdfcaeab-5a54-4f15-9e69-f5712b23d6fc" containerName="horizon-log" containerID="cri-o://39cbfbab8bd3e2d83341a15e9de387fd8f9642aa82775d9d72d4481b3ca5174a" gracePeriod=30 Feb 01 15:07:12 crc kubenswrapper[4820]: I0201 15:07:12.493705 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69c64959b6-498kr" event={"ID":"35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18","Type":"ContainerStarted","Data":"dc07e96f87a3ae724d5b54090035066296c8599588dd3f920deadcd63b3cc19d"} Feb 01 15:07:12 crc kubenswrapper[4820]: I0201 15:07:12.493767 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69c64959b6-498kr" event={"ID":"35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18","Type":"ContainerStarted","Data":"78b51d81abeb0e15d8c4b9bd85e0d4fcd8a735076f835945c5ddbd8dca4827d7"} Feb 01 15:07:12 crc kubenswrapper[4820]: I0201 15:07:12.498767 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5897584f9c-s8gwr" event={"ID":"90ffe183-d2c6-4914-9c7b-faedde2e565a","Type":"ContainerStarted","Data":"4f21622ffca2e20231e1ba387568f74dae2166a9689989f864803ea1f7ef5110"} Feb 01 15:07:12 crc kubenswrapper[4820]: I0201 15:07:12.498798 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5897584f9c-s8gwr" 
event={"ID":"90ffe183-d2c6-4914-9c7b-faedde2e565a","Type":"ContainerStarted","Data":"67cc13e343b3b5e9996ea8646950e46013ee327989d857eb6c2e633c4b5f5e81"} Feb 01 15:07:12 crc kubenswrapper[4820]: I0201 15:07:12.498926 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5897584f9c-s8gwr" podUID="90ffe183-d2c6-4914-9c7b-faedde2e565a" containerName="horizon-log" containerID="cri-o://67cc13e343b3b5e9996ea8646950e46013ee327989d857eb6c2e633c4b5f5e81" gracePeriod=30 Feb 01 15:07:12 crc kubenswrapper[4820]: I0201 15:07:12.498950 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5897584f9c-s8gwr" podUID="90ffe183-d2c6-4914-9c7b-faedde2e565a" containerName="horizon" containerID="cri-o://4f21622ffca2e20231e1ba387568f74dae2166a9689989f864803ea1f7ef5110" gracePeriod=30 Feb 01 15:07:12 crc kubenswrapper[4820]: I0201 15:07:12.515206 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-8445bf989c-rxj4q" podStartSLOduration=3.350238046 podStartE2EDuration="16.515187432s" podCreationTimestamp="2026-02-01 15:06:56 +0000 UTC" firstStartedPulling="2026-02-01 15:06:58.093652119 +0000 UTC m=+2759.614018403" lastFinishedPulling="2026-02-01 15:07:11.258601505 +0000 UTC m=+2772.778967789" observedRunningTime="2026-02-01 15:07:12.501334281 +0000 UTC m=+2774.021700565" watchObservedRunningTime="2026-02-01 15:07:12.515187432 +0000 UTC m=+2774.035553716" Feb 01 15:07:12 crc kubenswrapper[4820]: I0201 15:07:12.524407 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79c9fd7b88-c7gn4" event={"ID":"9551e678-9809-43e8-8ea2-33c7b873f076","Type":"ContainerStarted","Data":"e3d513fee4c22326779615924705d06357ff59f7e252b594a9eec22c366211bc"} Feb 01 15:07:12 crc kubenswrapper[4820]: I0201 15:07:12.524496 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79c9fd7b88-c7gn4" event={"ID":"9551e678-9809-43e8-8ea2-33c7b873f076","Type":"ContainerStarted","Data":"b28977f89c28d14b467c829ed188a961a6e13bc5d6afb6189bcb2b6659a2f028"} Feb 01 15:07:12 crc kubenswrapper[4820]: I0201 15:07:12.533030 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-ptbpt" event={"ID":"d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052","Type":"ContainerStarted","Data":"4f250778a20449f81282ff3e9c75b302db1bcfe353131a6a2daacc6c60e0bd97"} Feb 01 15:07:12 crc kubenswrapper[4820]: I0201 15:07:12.536152 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c528ece8-d372-4479-b08e-cd5e12306def","Type":"ContainerStarted","Data":"08dae46ebeb5fd9982962388bc095e30572635ca73ef7968cd3e5f3928dcd021"} Feb 01 15:07:12 crc kubenswrapper[4820]: I0201 15:07:12.536637 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-69c64959b6-498kr" podStartSLOduration=3.703711786 podStartE2EDuration="13.536616057s" podCreationTimestamp="2026-02-01 15:06:59 +0000 UTC" firstStartedPulling="2026-02-01 15:07:01.429285191 +0000 UTC m=+2762.949651475" lastFinishedPulling="2026-02-01 15:07:11.262189462 +0000 UTC m=+2772.782555746" observedRunningTime="2026-02-01 15:07:12.530599373 +0000 UTC m=+2774.050965657" watchObservedRunningTime="2026-02-01 15:07:12.536616057 +0000 UTC m=+2774.056982351" Feb 01 15:07:12 crc kubenswrapper[4820]: I0201 15:07:12.565345 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5897584f9c-s8gwr" 
podStartSLOduration=3.161875629 podStartE2EDuration="16.565313225s" podCreationTimestamp="2026-02-01 15:06:56 +0000 UTC" firstStartedPulling="2026-02-01 15:06:57.863362186 +0000 UTC m=+2759.383728470" lastFinishedPulling="2026-02-01 15:07:11.266799782 +0000 UTC m=+2772.787166066" observedRunningTime="2026-02-01 15:07:12.562508418 +0000 UTC m=+2774.082874702" watchObservedRunningTime="2026-02-01 15:07:12.565313225 +0000 UTC m=+2774.085679529" Feb 01 15:07:12 crc kubenswrapper[4820]: I0201 15:07:12.612261 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-79c9fd7b88-c7gn4" podStartSLOduration=3.805365726 podStartE2EDuration="13.612228811s" podCreationTimestamp="2026-02-01 15:06:59 +0000 UTC" firstStartedPulling="2026-02-01 15:07:01.453465701 +0000 UTC m=+2762.973831985" lastFinishedPulling="2026-02-01 15:07:11.260328776 +0000 UTC m=+2772.780695070" observedRunningTime="2026-02-01 15:07:12.602648991 +0000 UTC m=+2774.123015295" watchObservedRunningTime="2026-02-01 15:07:12.612228811 +0000 UTC m=+2774.132595115" Feb 01 15:07:12 crc kubenswrapper[4820]: I0201 15:07:12.889438 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 01 15:07:12 crc kubenswrapper[4820]: I0201 15:07:12.889507 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 01 15:07:12 crc kubenswrapper[4820]: I0201 15:07:12.947935 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 01 15:07:12 crc kubenswrapper[4820]: I0201 15:07:12.977374 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 01 15:07:13 crc kubenswrapper[4820]: I0201 15:07:13.553431 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c528ece8-d372-4479-b08e-cd5e12306def","Type":"ContainerStarted","Data":"2566d9b7309313dfbd939c942f35862027bb716dec11ecc7ee7e41e90b6d77cf"} Feb 01 15:07:13 crc kubenswrapper[4820]: I0201 15:07:13.555219 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 01 15:07:13 crc kubenswrapper[4820]: I0201 15:07:13.555302 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 01 15:07:13 crc kubenswrapper[4820]: I0201 15:07:13.604672 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.604643783 podStartE2EDuration="9.604643783s" podCreationTimestamp="2026-02-01 15:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 15:07:13.582488801 +0000 UTC m=+2775.102855095" watchObservedRunningTime="2026-02-01 15:07:13.604643783 +0000 UTC m=+2775.125010077" Feb 01 15:07:14 crc kubenswrapper[4820]: I0201 15:07:14.778394 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 01 15:07:14 crc kubenswrapper[4820]: I0201 15:07:14.779278 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 01 15:07:14 crc kubenswrapper[4820]: I0201 15:07:14.823508 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-internal-api-0" Feb 01 15:07:14 crc kubenswrapper[4820]: I0201 15:07:14.844486 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 01 15:07:15 crc kubenswrapper[4820]: I0201 15:07:15.572946 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 01 15:07:15 crc kubenswrapper[4820]: I0201 15:07:15.573010 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 01 15:07:16 crc kubenswrapper[4820]: I0201 15:07:16.972197 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5897584f9c-s8gwr" Feb 01 15:07:17 crc kubenswrapper[4820]: I0201 15:07:17.064158 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8445bf989c-rxj4q" Feb 01 15:07:17 crc kubenswrapper[4820]: I0201 15:07:17.637233 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k98x2"] Feb 01 15:07:17 crc kubenswrapper[4820]: I0201 15:07:17.639049 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k98x2" Feb 01 15:07:17 crc kubenswrapper[4820]: I0201 15:07:17.658269 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k98x2"] Feb 01 15:07:17 crc kubenswrapper[4820]: I0201 15:07:17.789429 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/188a376a-f9ba-49e2-9747-467a340cc0f1-catalog-content\") pod \"redhat-operators-k98x2\" (UID: \"188a376a-f9ba-49e2-9747-467a340cc0f1\") " pod="openshift-marketplace/redhat-operators-k98x2" Feb 01 15:07:17 crc kubenswrapper[4820]: I0201 15:07:17.789772 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/188a376a-f9ba-49e2-9747-467a340cc0f1-utilities\") pod \"redhat-operators-k98x2\" (UID: \"188a376a-f9ba-49e2-9747-467a340cc0f1\") " pod="openshift-marketplace/redhat-operators-k98x2" Feb 01 15:07:17 crc kubenswrapper[4820]: I0201 15:07:17.789906 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtjjc\" (UniqueName: \"kubernetes.io/projected/188a376a-f9ba-49e2-9747-467a340cc0f1-kube-api-access-rtjjc\") pod \"redhat-operators-k98x2\" (UID: \"188a376a-f9ba-49e2-9747-467a340cc0f1\") " pod="openshift-marketplace/redhat-operators-k98x2" Feb 01 15:07:17 crc kubenswrapper[4820]: I0201 15:07:17.892845 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/188a376a-f9ba-49e2-9747-467a340cc0f1-utilities\") pod \"redhat-operators-k98x2\" (UID: \"188a376a-f9ba-49e2-9747-467a340cc0f1\") " pod="openshift-marketplace/redhat-operators-k98x2" Feb 01 15:07:17 crc kubenswrapper[4820]: I0201 15:07:17.893015 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtjjc\" (UniqueName: \"kubernetes.io/projected/188a376a-f9ba-49e2-9747-467a340cc0f1-kube-api-access-rtjjc\") pod \"redhat-operators-k98x2\" (UID: \"188a376a-f9ba-49e2-9747-467a340cc0f1\") " pod="openshift-marketplace/redhat-operators-k98x2" Feb 01 15:07:17 crc kubenswrapper[4820]: I0201 15:07:17.893123 4820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/188a376a-f9ba-49e2-9747-467a340cc0f1-catalog-content\") pod \"redhat-operators-k98x2\" (UID: \"188a376a-f9ba-49e2-9747-467a340cc0f1\") " pod="openshift-marketplace/redhat-operators-k98x2" Feb 01 15:07:17 crc kubenswrapper[4820]: I0201 15:07:17.893420 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/188a376a-f9ba-49e2-9747-467a340cc0f1-utilities\") pod \"redhat-operators-k98x2\" (UID: \"188a376a-f9ba-49e2-9747-467a340cc0f1\") " pod="openshift-marketplace/redhat-operators-k98x2" Feb 01 15:07:17 crc kubenswrapper[4820]: I0201 15:07:17.893683 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/188a376a-f9ba-49e2-9747-467a340cc0f1-catalog-content\") pod \"redhat-operators-k98x2\" (UID: \"188a376a-f9ba-49e2-9747-467a340cc0f1\") " pod="openshift-marketplace/redhat-operators-k98x2" Feb 01 15:07:17 crc kubenswrapper[4820]: I0201 15:07:17.918981 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtjjc\" (UniqueName: \"kubernetes.io/projected/188a376a-f9ba-49e2-9747-467a340cc0f1-kube-api-access-rtjjc\") pod \"redhat-operators-k98x2\" (UID: \"188a376a-f9ba-49e2-9747-467a340cc0f1\") " pod="openshift-marketplace/redhat-operators-k98x2" Feb 01 15:07:17 crc kubenswrapper[4820]: I0201 15:07:17.979017 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k98x2" Feb 01 15:07:19 crc kubenswrapper[4820]: I0201 15:07:19.575575 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 01 15:07:19 crc kubenswrapper[4820]: I0201 15:07:19.576184 4820 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 01 15:07:19 crc kubenswrapper[4820]: I0201 15:07:19.614888 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 01 15:07:19 crc kubenswrapper[4820]: I0201 15:07:19.626003 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 01 15:07:19 crc kubenswrapper[4820]: I0201 15:07:19.689261 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 01 15:07:20 crc kubenswrapper[4820]: I0201 15:07:20.140167 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:07:20 crc kubenswrapper[4820]: I0201 15:07:20.141905 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:07:20 crc kubenswrapper[4820]: I0201 15:07:20.263380 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:07:20 crc kubenswrapper[4820]: I0201 15:07:20.263743 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:07:20 crc kubenswrapper[4820]: I0201 15:07:20.751043 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k98x2"] Feb 01 15:07:21 crc kubenswrapper[4820]: I0201 15:07:21.642671 4820 generic.go:334] "Generic (PLEG): container finished" 
podID="188a376a-f9ba-49e2-9747-467a340cc0f1" containerID="60670d74e6caa184a66d6d02f74fed53ab75e93d626ee2ef0db47c73f24db283" exitCode=0 Feb 01 15:07:21 crc kubenswrapper[4820]: I0201 15:07:21.643125 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k98x2" event={"ID":"188a376a-f9ba-49e2-9747-467a340cc0f1","Type":"ContainerDied","Data":"60670d74e6caa184a66d6d02f74fed53ab75e93d626ee2ef0db47c73f24db283"} Feb 01 15:07:21 crc kubenswrapper[4820]: I0201 15:07:21.643224 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k98x2" event={"ID":"188a376a-f9ba-49e2-9747-467a340cc0f1","Type":"ContainerStarted","Data":"74d7ee782d0ee9933e01685ca78ef71ad1ee02a740f6e85e2bab441d76076bbb"} Feb 01 15:07:21 crc kubenswrapper[4820]: I0201 15:07:21.649601 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-ptbpt" event={"ID":"d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052","Type":"ContainerStarted","Data":"4d29f0891610f6036af7cfcb6f99b67fb2175096a87160e5f794ac03e8827d19"} Feb 01 15:07:21 crc kubenswrapper[4820]: I0201 15:07:21.719404 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-ptbpt" podStartSLOduration=7.45279731 podStartE2EDuration="15.719382245s" podCreationTimestamp="2026-02-01 15:07:06 +0000 UTC" firstStartedPulling="2026-02-01 15:07:11.806485605 +0000 UTC m=+2773.326851889" lastFinishedPulling="2026-02-01 15:07:20.07307054 +0000 UTC m=+2781.593436824" observedRunningTime="2026-02-01 15:07:21.694367054 +0000 UTC m=+2783.214733348" watchObservedRunningTime="2026-02-01 15:07:21.719382245 +0000 UTC m=+2783.239748529" Feb 01 15:07:22 crc kubenswrapper[4820]: I0201 15:07:22.675445 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k98x2" event={"ID":"188a376a-f9ba-49e2-9747-467a340cc0f1","Type":"ContainerStarted","Data":"d4bd67c37c7aead9e402ce1c3de12605eef789b87a3f89b99ff99418d0758c5f"} Feb 01 15:07:25 crc kubenswrapper[4820]: I0201 15:07:25.715578 4820 generic.go:334] "Generic (PLEG): container finished" podID="188a376a-f9ba-49e2-9747-467a340cc0f1" containerID="d4bd67c37c7aead9e402ce1c3de12605eef789b87a3f89b99ff99418d0758c5f" exitCode=0 Feb 01 15:07:25 crc kubenswrapper[4820]: I0201 15:07:25.715646 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k98x2" event={"ID":"188a376a-f9ba-49e2-9747-467a340cc0f1","Type":"ContainerDied","Data":"d4bd67c37c7aead9e402ce1c3de12605eef789b87a3f89b99ff99418d0758c5f"} Feb 01 15:07:26 crc kubenswrapper[4820]: I0201 15:07:26.733563 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k98x2" event={"ID":"188a376a-f9ba-49e2-9747-467a340cc0f1","Type":"ContainerStarted","Data":"b7d55543d1016d797ed7487714d436fdfc2e16c48aa00a5a2d6f877fa5ba3ba7"} Feb 01 15:07:26 crc kubenswrapper[4820]: I0201 15:07:26.812391 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k98x2" podStartSLOduration=5.3088806139999996 podStartE2EDuration="9.812361744s" podCreationTimestamp="2026-02-01 15:07:17 +0000 UTC" firstStartedPulling="2026-02-01 15:07:21.645681758 +0000 UTC m=+2783.166048052" lastFinishedPulling="2026-02-01 15:07:26.149162878 +0000 UTC m=+2787.669529182" observedRunningTime="2026-02-01 15:07:26.810184532 +0000 UTC m=+2788.330550836" watchObservedRunningTime="2026-02-01 15:07:26.812361744 +0000 UTC m=+2788.332728018" 
Feb 01 15:07:27 crc kubenswrapper[4820]: I0201 15:07:27.980155 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k98x2" Feb 01 15:07:27 crc kubenswrapper[4820]: I0201 15:07:27.980568 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k98x2" Feb 01 15:07:29 crc kubenswrapper[4820]: I0201 15:07:29.039593 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k98x2" podUID="188a376a-f9ba-49e2-9747-467a340cc0f1" containerName="registry-server" probeResult="failure" output=< Feb 01 15:07:29 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 01 15:07:29 crc kubenswrapper[4820]: > Feb 01 15:07:30 crc kubenswrapper[4820]: I0201 15:07:30.139516 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-69c64959b6-498kr" podUID="35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.242:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.242:8443: connect: connection refused" Feb 01 15:07:30 crc kubenswrapper[4820]: I0201 15:07:30.264015 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-79c9fd7b88-c7gn4" podUID="9551e678-9809-43e8-8ea2-33c7b873f076" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.241:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.241:8443: connect: connection refused" Feb 01 15:07:35 crc kubenswrapper[4820]: I0201 15:07:35.817626 4820 generic.go:334] "Generic (PLEG): container finished" podID="d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052" containerID="4d29f0891610f6036af7cfcb6f99b67fb2175096a87160e5f794ac03e8827d19" exitCode=0 Feb 01 15:07:35 crc kubenswrapper[4820]: I0201 15:07:35.817755 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-ptbpt" event={"ID":"d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052","Type":"ContainerDied","Data":"4d29f0891610f6036af7cfcb6f99b67fb2175096a87160e5f794ac03e8827d19"} Feb 01 15:07:37 crc kubenswrapper[4820]: I0201 15:07:37.287507 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-ptbpt" Feb 01 15:07:37 crc kubenswrapper[4820]: I0201 15:07:37.456631 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss2hb\" (UniqueName: \"kubernetes.io/projected/d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052-kube-api-access-ss2hb\") pod \"d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052\" (UID: \"d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052\") " Feb 01 15:07:37 crc kubenswrapper[4820]: I0201 15:07:37.456773 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052-job-config-data\") pod \"d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052\" (UID: \"d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052\") " Feb 01 15:07:37 crc kubenswrapper[4820]: I0201 15:07:37.456833 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052-combined-ca-bundle\") pod \"d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052\" (UID: \"d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052\") " Feb 01 15:07:37 crc kubenswrapper[4820]: I0201 15:07:37.456921 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052-config-data\") pod \"d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052\" (UID: \"d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052\") " Feb 01 15:07:37 crc kubenswrapper[4820]: I0201 15:07:37.462617 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052-kube-api-access-ss2hb" (OuterVolumeSpecName: "kube-api-access-ss2hb") pod "d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052" (UID: "d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052"). InnerVolumeSpecName "kube-api-access-ss2hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:07:37 crc kubenswrapper[4820]: I0201 15:07:37.469064 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052-config-data" (OuterVolumeSpecName: "config-data") pod "d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052" (UID: "d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:07:37 crc kubenswrapper[4820]: I0201 15:07:37.473027 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052" (UID: "d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:07:37 crc kubenswrapper[4820]: I0201 15:07:37.489329 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052" (UID: "d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:07:37 crc kubenswrapper[4820]: I0201 15:07:37.559217 4820 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052-job-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:37 crc kubenswrapper[4820]: I0201 15:07:37.559254 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:37 crc kubenswrapper[4820]: I0201 15:07:37.559266 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:37 crc kubenswrapper[4820]: I0201 15:07:37.559276 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss2hb\" (UniqueName: \"kubernetes.io/projected/d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052-kube-api-access-ss2hb\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:37 crc kubenswrapper[4820]: I0201 15:07:37.865911 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-ptbpt" event={"ID":"d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052","Type":"ContainerDied","Data":"4f250778a20449f81282ff3e9c75b302db1bcfe353131a6a2daacc6c60e0bd97"} Feb 01 15:07:37 crc kubenswrapper[4820]: I0201 15:07:37.865972 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-ptbpt" Feb 01 15:07:37 crc kubenswrapper[4820]: I0201 15:07:37.865985 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f250778a20449f81282ff3e9c75b302db1bcfe353131a6a2daacc6c60e0bd97" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.042178 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k98x2" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.121529 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k98x2" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.224277 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Feb 01 15:07:38 crc kubenswrapper[4820]: E0201 15:07:38.224979 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052" containerName="manila-db-sync" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.224998 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052" containerName="manila-db-sync" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.225203 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052" containerName="manila-db-sync" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.226123 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.230049 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.230416 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.230943 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.231030 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-qmpj7" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.256343 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.307379 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-ll64p"] Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.309037 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-ll64p" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.334488 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k98x2"] Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.348959 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-ll64p"] Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.358767 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.360534 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.365212 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.374666 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj8mn\" (UniqueName: \"kubernetes.io/projected/f71b0669-a307-45e5-8950-a81d88db9cac-kube-api-access-xj8mn\") pod \"manila-scheduler-0\" (UID: \"f71b0669-a307-45e5-8950-a81d88db9cac\") " pod="openstack/manila-scheduler-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.374706 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f71b0669-a307-45e5-8950-a81d88db9cac-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"f71b0669-a307-45e5-8950-a81d88db9cac\") " pod="openstack/manila-scheduler-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.374738 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f71b0669-a307-45e5-8950-a81d88db9cac-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"f71b0669-a307-45e5-8950-a81d88db9cac\") " pod="openstack/manila-scheduler-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.374759 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f71b0669-a307-45e5-8950-a81d88db9cac-config-data\") pod \"manila-scheduler-0\" (UID: \"f71b0669-a307-45e5-8950-a81d88db9cac\") " pod="openstack/manila-scheduler-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.374785 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f71b0669-a307-45e5-8950-a81d88db9cac-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"f71b0669-a307-45e5-8950-a81d88db9cac\") " pod="openstack/manila-scheduler-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.374927 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f71b0669-a307-45e5-8950-a81d88db9cac-scripts\") pod \"manila-scheduler-0\" (UID: \"f71b0669-a307-45e5-8950-a81d88db9cac\") " pod="openstack/manila-scheduler-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.401518 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.476343 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef4f913d-b12b-4cf0-af89-7b289df9ceed-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " pod="openstack/manila-share-share1-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.476413 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef4f913d-b12b-4cf0-af89-7b289df9ceed-scripts\") pod \"manila-share-share1-0\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " pod="openstack/manila-share-share1-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 
15:07:38.476450 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxcvr\" (UniqueName: \"kubernetes.io/projected/9ada9dd3-3a44-4050-8e20-76afe7a02f4c-kube-api-access-rxcvr\") pod \"dnsmasq-dns-69655fd4bf-ll64p\" (UID: \"9ada9dd3-3a44-4050-8e20-76afe7a02f4c\") " pod="openstack/dnsmasq-dns-69655fd4bf-ll64p" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.476482 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9ada9dd3-3a44-4050-8e20-76afe7a02f4c-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-ll64p\" (UID: \"9ada9dd3-3a44-4050-8e20-76afe7a02f4c\") " pod="openstack/dnsmasq-dns-69655fd4bf-ll64p" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.476522 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f71b0669-a307-45e5-8950-a81d88db9cac-scripts\") pod \"manila-scheduler-0\" (UID: \"f71b0669-a307-45e5-8950-a81d88db9cac\") " pod="openstack/manila-scheduler-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.476575 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ada9dd3-3a44-4050-8e20-76afe7a02f4c-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-ll64p\" (UID: \"9ada9dd3-3a44-4050-8e20-76afe7a02f4c\") " pod="openstack/dnsmasq-dns-69655fd4bf-ll64p" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.476602 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99kfq\" (UniqueName: \"kubernetes.io/projected/ef4f913d-b12b-4cf0-af89-7b289df9ceed-kube-api-access-99kfq\") pod \"manila-share-share1-0\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " pod="openstack/manila-share-share1-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.476630 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef4f913d-b12b-4cf0-af89-7b289df9ceed-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " pod="openstack/manila-share-share1-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.476654 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj8mn\" (UniqueName: \"kubernetes.io/projected/f71b0669-a307-45e5-8950-a81d88db9cac-kube-api-access-xj8mn\") pod \"manila-scheduler-0\" (UID: \"f71b0669-a307-45e5-8950-a81d88db9cac\") " pod="openstack/manila-scheduler-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.476682 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ada9dd3-3a44-4050-8e20-76afe7a02f4c-config\") pod \"dnsmasq-dns-69655fd4bf-ll64p\" (UID: \"9ada9dd3-3a44-4050-8e20-76afe7a02f4c\") " pod="openstack/dnsmasq-dns-69655fd4bf-ll64p" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.476714 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f71b0669-a307-45e5-8950-a81d88db9cac-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"f71b0669-a307-45e5-8950-a81d88db9cac\") " pod="openstack/manila-scheduler-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 
15:07:38.476739 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f71b0669-a307-45e5-8950-a81d88db9cac-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"f71b0669-a307-45e5-8950-a81d88db9cac\") " pod="openstack/manila-scheduler-0"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.476757 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ada9dd3-3a44-4050-8e20-76afe7a02f4c-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-ll64p\" (UID: \"9ada9dd3-3a44-4050-8e20-76afe7a02f4c\") " pod="openstack/dnsmasq-dns-69655fd4bf-ll64p"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.476776 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ef4f913d-b12b-4cf0-af89-7b289df9ceed-ceph\") pod \"manila-share-share1-0\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " pod="openstack/manila-share-share1-0"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.476792 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f71b0669-a307-45e5-8950-a81d88db9cac-config-data\") pod \"manila-scheduler-0\" (UID: \"f71b0669-a307-45e5-8950-a81d88db9cac\") " pod="openstack/manila-scheduler-0"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.476810 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ada9dd3-3a44-4050-8e20-76afe7a02f4c-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-ll64p\" (UID: \"9ada9dd3-3a44-4050-8e20-76afe7a02f4c\") " pod="openstack/dnsmasq-dns-69655fd4bf-ll64p"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.476836 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f71b0669-a307-45e5-8950-a81d88db9cac-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"f71b0669-a307-45e5-8950-a81d88db9cac\") " pod="openstack/manila-scheduler-0"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.476858 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/ef4f913d-b12b-4cf0-af89-7b289df9ceed-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " pod="openstack/manila-share-share1-0"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.476977 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef4f913d-b12b-4cf0-af89-7b289df9ceed-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " pod="openstack/manila-share-share1-0"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.477004 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef4f913d-b12b-4cf0-af89-7b289df9ceed-config-data\") pod \"manila-share-share1-0\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " pod="openstack/manila-share-share1-0"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.477354 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f71b0669-a307-45e5-8950-a81d88db9cac-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"f71b0669-a307-45e5-8950-a81d88db9cac\") " pod="openstack/manila-scheduler-0"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.481774 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f71b0669-a307-45e5-8950-a81d88db9cac-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"f71b0669-a307-45e5-8950-a81d88db9cac\") " pod="openstack/manila-scheduler-0"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.483580 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f71b0669-a307-45e5-8950-a81d88db9cac-scripts\") pod \"manila-scheduler-0\" (UID: \"f71b0669-a307-45e5-8950-a81d88db9cac\") " pod="openstack/manila-scheduler-0"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.485535 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f71b0669-a307-45e5-8950-a81d88db9cac-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"f71b0669-a307-45e5-8950-a81d88db9cac\") " pod="openstack/manila-scheduler-0"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.494898 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f71b0669-a307-45e5-8950-a81d88db9cac-config-data\") pod \"manila-scheduler-0\" (UID: \"f71b0669-a307-45e5-8950-a81d88db9cac\") " pod="openstack/manila-scheduler-0"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.508903 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"]
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.510005 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj8mn\" (UniqueName: \"kubernetes.io/projected/f71b0669-a307-45e5-8950-a81d88db9cac-kube-api-access-xj8mn\") pod \"manila-scheduler-0\" (UID: \"f71b0669-a307-45e5-8950-a81d88db9cac\") " pod="openstack/manila-scheduler-0"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.510494 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.513356 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.518993 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"]
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.546660 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.578832 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ada9dd3-3a44-4050-8e20-76afe7a02f4c-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-ll64p\" (UID: \"9ada9dd3-3a44-4050-8e20-76afe7a02f4c\") " pod="openstack/dnsmasq-dns-69655fd4bf-ll64p"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.578909 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ef4f913d-b12b-4cf0-af89-7b289df9ceed-ceph\") pod \"manila-share-share1-0\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " pod="openstack/manila-share-share1-0"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.578932 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ada9dd3-3a44-4050-8e20-76afe7a02f4c-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-ll64p\" (UID: \"9ada9dd3-3a44-4050-8e20-76afe7a02f4c\") " pod="openstack/dnsmasq-dns-69655fd4bf-ll64p"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.578970 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/ef4f913d-b12b-4cf0-af89-7b289df9ceed-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " pod="openstack/manila-share-share1-0"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.579000 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef4f913d-b12b-4cf0-af89-7b289df9ceed-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " pod="openstack/manila-share-share1-0"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.579022 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef4f913d-b12b-4cf0-af89-7b289df9ceed-config-data\") pod \"manila-share-share1-0\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " pod="openstack/manila-share-share1-0"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.579068 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef4f913d-b12b-4cf0-af89-7b289df9ceed-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " pod="openstack/manila-share-share1-0"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.579115 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef4f913d-b12b-4cf0-af89-7b289df9ceed-scripts\") pod \"manila-share-share1-0\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " pod="openstack/manila-share-share1-0"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.579142 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxcvr\" (UniqueName: \"kubernetes.io/projected/9ada9dd3-3a44-4050-8e20-76afe7a02f4c-kube-api-access-rxcvr\") pod \"dnsmasq-dns-69655fd4bf-ll64p\" (UID: \"9ada9dd3-3a44-4050-8e20-76afe7a02f4c\") " pod="openstack/dnsmasq-dns-69655fd4bf-ll64p"
Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.579165 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9ada9dd3-3a44-4050-8e20-76afe7a02f4c-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-ll64p\" (UID: \"9ada9dd3-3a44-4050-8e20-76afe7a02f4c\") " pod="openstack/dnsmasq-dns-69655fd4bf-ll64p"
"operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9ada9dd3-3a44-4050-8e20-76afe7a02f4c-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-ll64p\" (UID: \"9ada9dd3-3a44-4050-8e20-76afe7a02f4c\") " pod="openstack/dnsmasq-dns-69655fd4bf-ll64p" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.579200 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99kfq\" (UniqueName: \"kubernetes.io/projected/ef4f913d-b12b-4cf0-af89-7b289df9ceed-kube-api-access-99kfq\") pod \"manila-share-share1-0\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " pod="openstack/manila-share-share1-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.579222 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ada9dd3-3a44-4050-8e20-76afe7a02f4c-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-ll64p\" (UID: \"9ada9dd3-3a44-4050-8e20-76afe7a02f4c\") " pod="openstack/dnsmasq-dns-69655fd4bf-ll64p" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.579253 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef4f913d-b12b-4cf0-af89-7b289df9ceed-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " pod="openstack/manila-share-share1-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.579272 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ada9dd3-3a44-4050-8e20-76afe7a02f4c-config\") pod \"dnsmasq-dns-69655fd4bf-ll64p\" (UID: \"9ada9dd3-3a44-4050-8e20-76afe7a02f4c\") " pod="openstack/dnsmasq-dns-69655fd4bf-ll64p" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.580648 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ada9dd3-3a44-4050-8e20-76afe7a02f4c-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-ll64p\" (UID: \"9ada9dd3-3a44-4050-8e20-76afe7a02f4c\") " pod="openstack/dnsmasq-dns-69655fd4bf-ll64p" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.580831 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9ada9dd3-3a44-4050-8e20-76afe7a02f4c-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-ll64p\" (UID: \"9ada9dd3-3a44-4050-8e20-76afe7a02f4c\") " pod="openstack/dnsmasq-dns-69655fd4bf-ll64p" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.581300 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ada9dd3-3a44-4050-8e20-76afe7a02f4c-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-ll64p\" (UID: \"9ada9dd3-3a44-4050-8e20-76afe7a02f4c\") " pod="openstack/dnsmasq-dns-69655fd4bf-ll64p" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.581601 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef4f913d-b12b-4cf0-af89-7b289df9ceed-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " pod="openstack/manila-share-share1-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.581691 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: 
\"kubernetes.io/host-path/ef4f913d-b12b-4cf0-af89-7b289df9ceed-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " pod="openstack/manila-share-share1-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.582128 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ada9dd3-3a44-4050-8e20-76afe7a02f4c-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-ll64p\" (UID: \"9ada9dd3-3a44-4050-8e20-76afe7a02f4c\") " pod="openstack/dnsmasq-dns-69655fd4bf-ll64p" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.582520 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ada9dd3-3a44-4050-8e20-76afe7a02f4c-config\") pod \"dnsmasq-dns-69655fd4bf-ll64p\" (UID: \"9ada9dd3-3a44-4050-8e20-76afe7a02f4c\") " pod="openstack/dnsmasq-dns-69655fd4bf-ll64p" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.591294 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef4f913d-b12b-4cf0-af89-7b289df9ceed-scripts\") pod \"manila-share-share1-0\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " pod="openstack/manila-share-share1-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.591819 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef4f913d-b12b-4cf0-af89-7b289df9ceed-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " pod="openstack/manila-share-share1-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.597914 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ef4f913d-b12b-4cf0-af89-7b289df9ceed-ceph\") pod \"manila-share-share1-0\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " pod="openstack/manila-share-share1-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.597933 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef4f913d-b12b-4cf0-af89-7b289df9ceed-config-data\") pod \"manila-share-share1-0\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " pod="openstack/manila-share-share1-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.598322 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef4f913d-b12b-4cf0-af89-7b289df9ceed-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " pod="openstack/manila-share-share1-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.601562 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99kfq\" (UniqueName: \"kubernetes.io/projected/ef4f913d-b12b-4cf0-af89-7b289df9ceed-kube-api-access-99kfq\") pod \"manila-share-share1-0\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " pod="openstack/manila-share-share1-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.603073 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxcvr\" (UniqueName: \"kubernetes.io/projected/9ada9dd3-3a44-4050-8e20-76afe7a02f4c-kube-api-access-rxcvr\") pod \"dnsmasq-dns-69655fd4bf-ll64p\" (UID: \"9ada9dd3-3a44-4050-8e20-76afe7a02f4c\") " pod="openstack/dnsmasq-dns-69655fd4bf-ll64p" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 
15:07:38.630527 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-ll64p" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.681042 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-etc-machine-id\") pod \"manila-api-0\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") " pod="openstack/manila-api-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.681289 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-config-data-custom\") pod \"manila-api-0\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") " pod="openstack/manila-api-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.681308 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwttt\" (UniqueName: \"kubernetes.io/projected/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-kube-api-access-wwttt\") pod \"manila-api-0\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") " pod="openstack/manila-api-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.681569 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-config-data\") pod \"manila-api-0\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") " pod="openstack/manila-api-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.681624 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") " pod="openstack/manila-api-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.681719 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-scripts\") pod \"manila-api-0\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") " pod="openstack/manila-api-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.682144 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-logs\") pod \"manila-api-0\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") " pod="openstack/manila-api-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.690762 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.784818 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-logs\") pod \"manila-api-0\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") " pod="openstack/manila-api-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.784917 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-etc-machine-id\") pod \"manila-api-0\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") " pod="openstack/manila-api-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.784943 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-config-data-custom\") pod \"manila-api-0\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") " pod="openstack/manila-api-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.784962 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwttt\" (UniqueName: \"kubernetes.io/projected/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-kube-api-access-wwttt\") pod \"manila-api-0\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") " pod="openstack/manila-api-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.785002 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-config-data\") pod \"manila-api-0\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") " pod="openstack/manila-api-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.785022 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") " pod="openstack/manila-api-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.785050 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-scripts\") pod \"manila-api-0\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") " pod="openstack/manila-api-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.785288 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-logs\") pod \"manila-api-0\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") " pod="openstack/manila-api-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.785637 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-etc-machine-id\") pod \"manila-api-0\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") " pod="openstack/manila-api-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.789952 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-config-data\") pod \"manila-api-0\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") " 
pod="openstack/manila-api-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.790456 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") " pod="openstack/manila-api-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.796020 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-config-data-custom\") pod \"manila-api-0\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") " pod="openstack/manila-api-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.796901 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-scripts\") pod \"manila-api-0\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") " pod="openstack/manila-api-0" Feb 01 15:07:38 crc kubenswrapper[4820]: I0201 15:07:38.803288 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwttt\" (UniqueName: \"kubernetes.io/projected/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-kube-api-access-wwttt\") pod \"manila-api-0\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") " pod="openstack/manila-api-0" Feb 01 15:07:39 crc kubenswrapper[4820]: I0201 15:07:39.026268 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 01 15:07:39 crc kubenswrapper[4820]: I0201 15:07:39.051085 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 01 15:07:39 crc kubenswrapper[4820]: W0201 15:07:39.083609 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf71b0669_a307_45e5_8950_a81d88db9cac.slice/crio-c86ff21ef1d261b1048cf9177ca7a6958ece16cc6da522926002e85d0522221a WatchSource:0}: Error finding container c86ff21ef1d261b1048cf9177ca7a6958ece16cc6da522926002e85d0522221a: Status 404 returned error can't find the container with id c86ff21ef1d261b1048cf9177ca7a6958ece16cc6da522926002e85d0522221a Feb 01 15:07:39 crc kubenswrapper[4820]: I0201 15:07:39.190527 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-ll64p"] Feb 01 15:07:39 crc kubenswrapper[4820]: I0201 15:07:39.273786 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 01 15:07:39 crc kubenswrapper[4820]: I0201 15:07:39.723330 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 01 15:07:39 crc kubenswrapper[4820]: W0201 15:07:39.750789 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5e26fd7_0a9d_4d5a_b6ef_996ac2c0ff8e.slice/crio-f004be6ccd8ae392950785d21aa11af3c090dc84eb1fff745de111de1657591d WatchSource:0}: Error finding container f004be6ccd8ae392950785d21aa11af3c090dc84eb1fff745de111de1657591d: Status 404 returned error can't find the container with id f004be6ccd8ae392950785d21aa11af3c090dc84eb1fff745de111de1657591d Feb 01 15:07:39 crc kubenswrapper[4820]: I0201 15:07:39.906369 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" 
event={"ID":"ef4f913d-b12b-4cf0-af89-7b289df9ceed","Type":"ContainerStarted","Data":"75c8218ef2c079300f69d5d14bd59c62c32f9e1907124a729c0c0bbd6fa1d79e"} Feb 01 15:07:39 crc kubenswrapper[4820]: I0201 15:07:39.908579 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e","Type":"ContainerStarted","Data":"f004be6ccd8ae392950785d21aa11af3c090dc84eb1fff745de111de1657591d"} Feb 01 15:07:39 crc kubenswrapper[4820]: I0201 15:07:39.921031 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"f71b0669-a307-45e5-8950-a81d88db9cac","Type":"ContainerStarted","Data":"c86ff21ef1d261b1048cf9177ca7a6958ece16cc6da522926002e85d0522221a"} Feb 01 15:07:39 crc kubenswrapper[4820]: I0201 15:07:39.933755 4820 generic.go:334] "Generic (PLEG): container finished" podID="9ada9dd3-3a44-4050-8e20-76afe7a02f4c" containerID="7ca2600e9741981b55f197e9e33ffd883532b95d1d5c7f0f1d2735f56ec18431" exitCode=0 Feb 01 15:07:39 crc kubenswrapper[4820]: I0201 15:07:39.933840 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-ll64p" event={"ID":"9ada9dd3-3a44-4050-8e20-76afe7a02f4c","Type":"ContainerDied","Data":"7ca2600e9741981b55f197e9e33ffd883532b95d1d5c7f0f1d2735f56ec18431"} Feb 01 15:07:39 crc kubenswrapper[4820]: I0201 15:07:39.933913 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-ll64p" event={"ID":"9ada9dd3-3a44-4050-8e20-76afe7a02f4c","Type":"ContainerStarted","Data":"ef124ec543973f87bd4682ae9e15e63ba41489e792dc84cace04e87fc0b9090b"} Feb 01 15:07:39 crc kubenswrapper[4820]: I0201 15:07:39.934032 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k98x2" podUID="188a376a-f9ba-49e2-9747-467a340cc0f1" containerName="registry-server" containerID="cri-o://b7d55543d1016d797ed7487714d436fdfc2e16c48aa00a5a2d6f877fa5ba3ba7" gracePeriod=2 Feb 01 15:07:40 crc kubenswrapper[4820]: I0201 15:07:40.474209 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k98x2" Feb 01 15:07:40 crc kubenswrapper[4820]: I0201 15:07:40.550483 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/188a376a-f9ba-49e2-9747-467a340cc0f1-catalog-content\") pod \"188a376a-f9ba-49e2-9747-467a340cc0f1\" (UID: \"188a376a-f9ba-49e2-9747-467a340cc0f1\") " Feb 01 15:07:40 crc kubenswrapper[4820]: I0201 15:07:40.559085 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/188a376a-f9ba-49e2-9747-467a340cc0f1-utilities\") pod \"188a376a-f9ba-49e2-9747-467a340cc0f1\" (UID: \"188a376a-f9ba-49e2-9747-467a340cc0f1\") " Feb 01 15:07:40 crc kubenswrapper[4820]: I0201 15:07:40.559287 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtjjc\" (UniqueName: \"kubernetes.io/projected/188a376a-f9ba-49e2-9747-467a340cc0f1-kube-api-access-rtjjc\") pod \"188a376a-f9ba-49e2-9747-467a340cc0f1\" (UID: \"188a376a-f9ba-49e2-9747-467a340cc0f1\") " Feb 01 15:07:40 crc kubenswrapper[4820]: I0201 15:07:40.560130 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/188a376a-f9ba-49e2-9747-467a340cc0f1-utilities" (OuterVolumeSpecName: "utilities") pod "188a376a-f9ba-49e2-9747-467a340cc0f1" (UID: "188a376a-f9ba-49e2-9747-467a340cc0f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:07:40 crc kubenswrapper[4820]: I0201 15:07:40.578105 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/188a376a-f9ba-49e2-9747-467a340cc0f1-kube-api-access-rtjjc" (OuterVolumeSpecName: "kube-api-access-rtjjc") pod "188a376a-f9ba-49e2-9747-467a340cc0f1" (UID: "188a376a-f9ba-49e2-9747-467a340cc0f1"). InnerVolumeSpecName "kube-api-access-rtjjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:07:40 crc kubenswrapper[4820]: I0201 15:07:40.660986 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtjjc\" (UniqueName: \"kubernetes.io/projected/188a376a-f9ba-49e2-9747-467a340cc0f1-kube-api-access-rtjjc\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:40 crc kubenswrapper[4820]: I0201 15:07:40.661016 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/188a376a-f9ba-49e2-9747-467a340cc0f1-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:40 crc kubenswrapper[4820]: I0201 15:07:40.718588 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/188a376a-f9ba-49e2-9747-467a340cc0f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "188a376a-f9ba-49e2-9747-467a340cc0f1" (UID: "188a376a-f9ba-49e2-9747-467a340cc0f1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:07:40 crc kubenswrapper[4820]: I0201 15:07:40.762809 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/188a376a-f9ba-49e2-9747-467a340cc0f1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:40 crc kubenswrapper[4820]: I0201 15:07:40.963576 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-ll64p" event={"ID":"9ada9dd3-3a44-4050-8e20-76afe7a02f4c","Type":"ContainerStarted","Data":"a8b998b4218ef838224bd673ff7a72491b06c144755dd6d6d58b5bda3167a03e"} Feb 01 15:07:40 crc kubenswrapper[4820]: I0201 15:07:40.964201 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69655fd4bf-ll64p" Feb 01 15:07:40 crc kubenswrapper[4820]: I0201 15:07:40.983136 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e","Type":"ContainerStarted","Data":"d00b16f47861163f91157a96babf633788df1e5ba7403087ae746d140a398d75"} Feb 01 15:07:40 crc kubenswrapper[4820]: I0201 15:07:40.990219 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69655fd4bf-ll64p" podStartSLOduration=2.990192804 podStartE2EDuration="2.990192804s" podCreationTimestamp="2026-02-01 15:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 15:07:40.989089687 +0000 UTC m=+2802.509455971" watchObservedRunningTime="2026-02-01 15:07:40.990192804 +0000 UTC m=+2802.510559088" Feb 01 15:07:40 crc kubenswrapper[4820]: I0201 15:07:40.990279 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"f71b0669-a307-45e5-8950-a81d88db9cac","Type":"ContainerStarted","Data":"1d6bd23d31dbc2ce207cd0f992ee89c2c2b8e79421952cbe59a5ee01d70fab36"} Feb 01 15:07:40 crc kubenswrapper[4820]: I0201 15:07:40.995345 4820 generic.go:334] "Generic (PLEG): container finished" podID="188a376a-f9ba-49e2-9747-467a340cc0f1" containerID="b7d55543d1016d797ed7487714d436fdfc2e16c48aa00a5a2d6f877fa5ba3ba7" exitCode=0 Feb 01 15:07:40 crc kubenswrapper[4820]: I0201 15:07:40.995413 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k98x2" Feb 01 15:07:40 crc kubenswrapper[4820]: I0201 15:07:40.995411 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k98x2" event={"ID":"188a376a-f9ba-49e2-9747-467a340cc0f1","Type":"ContainerDied","Data":"b7d55543d1016d797ed7487714d436fdfc2e16c48aa00a5a2d6f877fa5ba3ba7"} Feb 01 15:07:40 crc kubenswrapper[4820]: I0201 15:07:40.999016 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k98x2" event={"ID":"188a376a-f9ba-49e2-9747-467a340cc0f1","Type":"ContainerDied","Data":"74d7ee782d0ee9933e01685ca78ef71ad1ee02a740f6e85e2bab441d76076bbb"} Feb 01 15:07:40 crc kubenswrapper[4820]: I0201 15:07:40.999063 4820 scope.go:117] "RemoveContainer" containerID="b7d55543d1016d797ed7487714d436fdfc2e16c48aa00a5a2d6f877fa5ba3ba7" Feb 01 15:07:41 crc kubenswrapper[4820]: I0201 15:07:41.089923 4820 scope.go:117] "RemoveContainer" containerID="d4bd67c37c7aead9e402ce1c3de12605eef789b87a3f89b99ff99418d0758c5f" Feb 01 15:07:41 crc kubenswrapper[4820]: I0201 15:07:41.095713 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k98x2"] Feb 01 15:07:41 crc kubenswrapper[4820]: I0201 15:07:41.121958 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k98x2"] Feb 01 15:07:41 crc kubenswrapper[4820]: I0201 15:07:41.129611 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Feb 01 15:07:41 crc kubenswrapper[4820]: I0201 15:07:41.152048 4820 scope.go:117] "RemoveContainer" containerID="60670d74e6caa184a66d6d02f74fed53ab75e93d626ee2ef0db47c73f24db283" Feb 01 15:07:41 crc kubenswrapper[4820]: I0201 15:07:41.218107 4820 scope.go:117] "RemoveContainer" containerID="b7d55543d1016d797ed7487714d436fdfc2e16c48aa00a5a2d6f877fa5ba3ba7" Feb 01 15:07:41 crc kubenswrapper[4820]: I0201 15:07:41.225106 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="188a376a-f9ba-49e2-9747-467a340cc0f1" path="/var/lib/kubelet/pods/188a376a-f9ba-49e2-9747-467a340cc0f1/volumes" Feb 01 15:07:41 crc kubenswrapper[4820]: E0201 15:07:41.232114 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7d55543d1016d797ed7487714d436fdfc2e16c48aa00a5a2d6f877fa5ba3ba7\": container with ID starting with b7d55543d1016d797ed7487714d436fdfc2e16c48aa00a5a2d6f877fa5ba3ba7 not found: ID does not exist" containerID="b7d55543d1016d797ed7487714d436fdfc2e16c48aa00a5a2d6f877fa5ba3ba7" Feb 01 15:07:41 crc kubenswrapper[4820]: I0201 15:07:41.232174 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7d55543d1016d797ed7487714d436fdfc2e16c48aa00a5a2d6f877fa5ba3ba7"} err="failed to get container status \"b7d55543d1016d797ed7487714d436fdfc2e16c48aa00a5a2d6f877fa5ba3ba7\": rpc error: code = NotFound desc = could not find container \"b7d55543d1016d797ed7487714d436fdfc2e16c48aa00a5a2d6f877fa5ba3ba7\": container with ID starting with b7d55543d1016d797ed7487714d436fdfc2e16c48aa00a5a2d6f877fa5ba3ba7 not found: ID does not exist" Feb 01 15:07:41 crc kubenswrapper[4820]: I0201 15:07:41.232213 4820 scope.go:117] "RemoveContainer" containerID="d4bd67c37c7aead9e402ce1c3de12605eef789b87a3f89b99ff99418d0758c5f" Feb 01 15:07:41 crc kubenswrapper[4820]: E0201 15:07:41.233986 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"d4bd67c37c7aead9e402ce1c3de12605eef789b87a3f89b99ff99418d0758c5f\": container with ID starting with d4bd67c37c7aead9e402ce1c3de12605eef789b87a3f89b99ff99418d0758c5f not found: ID does not exist" containerID="d4bd67c37c7aead9e402ce1c3de12605eef789b87a3f89b99ff99418d0758c5f" Feb 01 15:07:41 crc kubenswrapper[4820]: I0201 15:07:41.234017 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4bd67c37c7aead9e402ce1c3de12605eef789b87a3f89b99ff99418d0758c5f"} err="failed to get container status \"d4bd67c37c7aead9e402ce1c3de12605eef789b87a3f89b99ff99418d0758c5f\": rpc error: code = NotFound desc = could not find container \"d4bd67c37c7aead9e402ce1c3de12605eef789b87a3f89b99ff99418d0758c5f\": container with ID starting with d4bd67c37c7aead9e402ce1c3de12605eef789b87a3f89b99ff99418d0758c5f not found: ID does not exist" Feb 01 15:07:41 crc kubenswrapper[4820]: I0201 15:07:41.234036 4820 scope.go:117] "RemoveContainer" containerID="60670d74e6caa184a66d6d02f74fed53ab75e93d626ee2ef0db47c73f24db283" Feb 01 15:07:41 crc kubenswrapper[4820]: E0201 15:07:41.234557 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60670d74e6caa184a66d6d02f74fed53ab75e93d626ee2ef0db47c73f24db283\": container with ID starting with 60670d74e6caa184a66d6d02f74fed53ab75e93d626ee2ef0db47c73f24db283 not found: ID does not exist" containerID="60670d74e6caa184a66d6d02f74fed53ab75e93d626ee2ef0db47c73f24db283" Feb 01 15:07:41 crc kubenswrapper[4820]: I0201 15:07:41.234605 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60670d74e6caa184a66d6d02f74fed53ab75e93d626ee2ef0db47c73f24db283"} err="failed to get container status \"60670d74e6caa184a66d6d02f74fed53ab75e93d626ee2ef0db47c73f24db283\": rpc error: code = NotFound desc = could not find container \"60670d74e6caa184a66d6d02f74fed53ab75e93d626ee2ef0db47c73f24db283\": container with ID starting with 60670d74e6caa184a66d6d02f74fed53ab75e93d626ee2ef0db47c73f24db283 not found: ID does not exist" Feb 01 15:07:42 crc kubenswrapper[4820]: I0201 15:07:42.020148 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e","Type":"ContainerStarted","Data":"9dca7ae4040c040f0dc0a8199b59471f7071a7ff95a28cfd4a5f8f2ebbf27160"} Feb 01 15:07:42 crc kubenswrapper[4820]: I0201 15:07:42.020246 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e" containerName="manila-api-log" containerID="cri-o://d00b16f47861163f91157a96babf633788df1e5ba7403087ae746d140a398d75" gracePeriod=30 Feb 01 15:07:42 crc kubenswrapper[4820]: I0201 15:07:42.020338 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e" containerName="manila-api" containerID="cri-o://9dca7ae4040c040f0dc0a8199b59471f7071a7ff95a28cfd4a5f8f2ebbf27160" gracePeriod=30 Feb 01 15:07:42 crc kubenswrapper[4820]: I0201 15:07:42.020737 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Feb 01 15:07:42 crc kubenswrapper[4820]: I0201 15:07:42.031665 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" 
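[Editor's note] The paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" records above are benign cleanup noise: the containers were already removed, and the CRI ContainerStatus call against CRI-O surfaces that as a gRPC NotFound status. A sketch of how a caller can recognize that case using the real google.golang.org/grpc/status and codes packages; the runtime response is stubbed here rather than taken from a live CRI connection:

package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyGone reports whether an error is the gRPC NotFound seen in the
// log above, which callers can safely treat as "nothing left to delete".
func alreadyGone(err error) bool {
	if err == nil {
		return false
	}
	s, ok := status.FromError(err)
	return ok && s.Code() == codes.NotFound
}

func main() {
	// Stand-in for the runtime's response; the real error comes back from
	// the CRI ContainerStatus RPC, as in the entries above.
	err := status.Error(codes.NotFound, `could not find container "b7d55543..."`)
	fmt.Println(alreadyGone(err))                        // true
	fmt.Println(alreadyGone(errors.New("dial timeout"))) // false
}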
event={"ID":"f71b0669-a307-45e5-8950-a81d88db9cac","Type":"ContainerStarted","Data":"c351321aefcc3e3c1f60c56ccf42a8deee63cb200083f647d0e6701c38d250d7"} Feb 01 15:07:42 crc kubenswrapper[4820]: I0201 15:07:42.083400 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.424709425 podStartE2EDuration="4.083375413s" podCreationTimestamp="2026-02-01 15:07:38 +0000 UTC" firstStartedPulling="2026-02-01 15:07:39.096418073 +0000 UTC m=+2800.616784357" lastFinishedPulling="2026-02-01 15:07:39.755084061 +0000 UTC m=+2801.275450345" observedRunningTime="2026-02-01 15:07:42.071283222 +0000 UTC m=+2803.591649526" watchObservedRunningTime="2026-02-01 15:07:42.083375413 +0000 UTC m=+2803.603741697" Feb 01 15:07:42 crc kubenswrapper[4820]: I0201 15:07:42.085039 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.085031222 podStartE2EDuration="4.085031222s" podCreationTimestamp="2026-02-01 15:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 15:07:42.048801073 +0000 UTC m=+2803.569167377" watchObservedRunningTime="2026-02-01 15:07:42.085031222 +0000 UTC m=+2803.605397506" Feb 01 15:07:42 crc kubenswrapper[4820]: I0201 15:07:42.486169 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:07:42 crc kubenswrapper[4820]: I0201 15:07:42.499166 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.050089 4820 generic.go:334] "Generic (PLEG): container finished" podID="a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e" containerID="9dca7ae4040c040f0dc0a8199b59471f7071a7ff95a28cfd4a5f8f2ebbf27160" exitCode=0 Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.050457 4820 generic.go:334] "Generic (PLEG): container finished" podID="a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e" containerID="d00b16f47861163f91157a96babf633788df1e5ba7403087ae746d140a398d75" exitCode=143 Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.050176 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e","Type":"ContainerDied","Data":"9dca7ae4040c040f0dc0a8199b59471f7071a7ff95a28cfd4a5f8f2ebbf27160"} Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.050552 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e","Type":"ContainerDied","Data":"d00b16f47861163f91157a96babf633788df1e5ba7403087ae746d140a398d75"} Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.055004 4820 generic.go:334] "Generic (PLEG): container finished" podID="cdfcaeab-5a54-4f15-9e69-f5712b23d6fc" containerID="0ce877ee0ddc0d61c8039e2afa0d5a58810cc73c5c566e98cfc613e1a4694547" exitCode=137 Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.055049 4820 generic.go:334] "Generic (PLEG): container finished" podID="cdfcaeab-5a54-4f15-9e69-f5712b23d6fc" containerID="39cbfbab8bd3e2d83341a15e9de387fd8f9642aa82775d9d72d4481b3ca5174a" exitCode=137 Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.055106 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8445bf989c-rxj4q" 
event={"ID":"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc","Type":"ContainerDied","Data":"0ce877ee0ddc0d61c8039e2afa0d5a58810cc73c5c566e98cfc613e1a4694547"} Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.055145 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8445bf989c-rxj4q" event={"ID":"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc","Type":"ContainerDied","Data":"39cbfbab8bd3e2d83341a15e9de387fd8f9642aa82775d9d72d4481b3ca5174a"} Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.060530 4820 generic.go:334] "Generic (PLEG): container finished" podID="90ffe183-d2c6-4914-9c7b-faedde2e565a" containerID="4f21622ffca2e20231e1ba387568f74dae2166a9689989f864803ea1f7ef5110" exitCode=137 Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.060558 4820 generic.go:334] "Generic (PLEG): container finished" podID="90ffe183-d2c6-4914-9c7b-faedde2e565a" containerID="67cc13e343b3b5e9996ea8646950e46013ee327989d857eb6c2e633c4b5f5e81" exitCode=137 Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.065755 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5897584f9c-s8gwr" event={"ID":"90ffe183-d2c6-4914-9c7b-faedde2e565a","Type":"ContainerDied","Data":"4f21622ffca2e20231e1ba387568f74dae2166a9689989f864803ea1f7ef5110"} Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.065820 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5897584f9c-s8gwr" event={"ID":"90ffe183-d2c6-4914-9c7b-faedde2e565a","Type":"ContainerDied","Data":"67cc13e343b3b5e9996ea8646950e46013ee327989d857eb6c2e633c4b5f5e81"} Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.080613 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5897584f9c-s8gwr" Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.244973 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90ffe183-d2c6-4914-9c7b-faedde2e565a-logs\") pod \"90ffe183-d2c6-4914-9c7b-faedde2e565a\" (UID: \"90ffe183-d2c6-4914-9c7b-faedde2e565a\") " Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.245124 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90ffe183-d2c6-4914-9c7b-faedde2e565a-config-data\") pod \"90ffe183-d2c6-4914-9c7b-faedde2e565a\" (UID: \"90ffe183-d2c6-4914-9c7b-faedde2e565a\") " Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.245321 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdkwd\" (UniqueName: \"kubernetes.io/projected/90ffe183-d2c6-4914-9c7b-faedde2e565a-kube-api-access-wdkwd\") pod \"90ffe183-d2c6-4914-9c7b-faedde2e565a\" (UID: \"90ffe183-d2c6-4914-9c7b-faedde2e565a\") " Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.245418 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90ffe183-d2c6-4914-9c7b-faedde2e565a-scripts\") pod \"90ffe183-d2c6-4914-9c7b-faedde2e565a\" (UID: \"90ffe183-d2c6-4914-9c7b-faedde2e565a\") " Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.245472 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/90ffe183-d2c6-4914-9c7b-faedde2e565a-horizon-secret-key\") pod \"90ffe183-d2c6-4914-9c7b-faedde2e565a\" (UID: \"90ffe183-d2c6-4914-9c7b-faedde2e565a\") " Feb 01 15:07:43 crc 
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.246402 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90ffe183-d2c6-4914-9c7b-faedde2e565a-logs" (OuterVolumeSpecName: "logs") pod "90ffe183-d2c6-4914-9c7b-faedde2e565a" (UID: "90ffe183-d2c6-4914-9c7b-faedde2e565a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.254269 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ffe183-d2c6-4914-9c7b-faedde2e565a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "90ffe183-d2c6-4914-9c7b-faedde2e565a" (UID: "90ffe183-d2c6-4914-9c7b-faedde2e565a"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.271955 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90ffe183-d2c6-4914-9c7b-faedde2e565a-kube-api-access-wdkwd" (OuterVolumeSpecName: "kube-api-access-wdkwd") pod "90ffe183-d2c6-4914-9c7b-faedde2e565a" (UID: "90ffe183-d2c6-4914-9c7b-faedde2e565a"). InnerVolumeSpecName "kube-api-access-wdkwd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.285813 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90ffe183-d2c6-4914-9c7b-faedde2e565a-config-data" (OuterVolumeSpecName: "config-data") pod "90ffe183-d2c6-4914-9c7b-faedde2e565a" (UID: "90ffe183-d2c6-4914-9c7b-faedde2e565a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.306517 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90ffe183-d2c6-4914-9c7b-faedde2e565a-scripts" (OuterVolumeSpecName: "scripts") pod "90ffe183-d2c6-4914-9c7b-faedde2e565a" (UID: "90ffe183-d2c6-4914-9c7b-faedde2e565a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.348542 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdkwd\" (UniqueName: \"kubernetes.io/projected/90ffe183-d2c6-4914-9c7b-faedde2e565a-kube-api-access-wdkwd\") on node \"crc\" DevicePath \"\""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.348573 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90ffe183-d2c6-4914-9c7b-faedde2e565a-scripts\") on node \"crc\" DevicePath \"\""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.348583 4820 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/90ffe183-d2c6-4914-9c7b-faedde2e565a-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.348591 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90ffe183-d2c6-4914-9c7b-faedde2e565a-logs\") on node \"crc\" DevicePath \"\""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.348602 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90ffe183-d2c6-4914-9c7b-faedde2e565a-config-data\") on node \"crc\" DevicePath \"\""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.556974 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.564453 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8445bf989c-rxj4q"
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.757705 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwttt\" (UniqueName: \"kubernetes.io/projected/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-kube-api-access-wwttt\") pod \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") "
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.757759 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-logs\") pod \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") "
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.757829 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-config-data-custom\") pod \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") "
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.757925 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2lf9\" (UniqueName: \"kubernetes.io/projected/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-kube-api-access-f2lf9\") pod \"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc\" (UID: \"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc\") "
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.757975 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-horizon-secret-key\") pod \"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc\" (UID: \"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc\") "
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.758001 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-scripts\") pod \"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc\" (UID: \"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc\") "
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.758066 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-etc-machine-id\") pod \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") "
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.758082 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-logs\") pod \"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc\" (UID: \"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc\") "
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.758107 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-scripts\") pod \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") "
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.758147 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-config-data\") pod \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") "
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.758177 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-config-data\") pod \"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc\" (UID: \"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc\") "
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.758214 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-combined-ca-bundle\") pod \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\" (UID: \"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e\") "
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.759403 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-logs" (OuterVolumeSpecName: "logs") pod "cdfcaeab-5a54-4f15-9e69-f5712b23d6fc" (UID: "cdfcaeab-5a54-4f15-9e69-f5712b23d6fc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.759670 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e" (UID: "a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.759918 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-logs" (OuterVolumeSpecName: "logs") pod "a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e" (UID: "a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.773105 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-kube-api-access-wwttt" (OuterVolumeSpecName: "kube-api-access-wwttt") pod "a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e" (UID: "a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e"). InnerVolumeSpecName "kube-api-access-wwttt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.773310 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "cdfcaeab-5a54-4f15-9e69-f5712b23d6fc" (UID: "cdfcaeab-5a54-4f15-9e69-f5712b23d6fc"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.773319 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e" (UID: "a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.773404 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-kube-api-access-f2lf9" (OuterVolumeSpecName: "kube-api-access-f2lf9") pod "cdfcaeab-5a54-4f15-9e69-f5712b23d6fc" (UID: "cdfcaeab-5a54-4f15-9e69-f5712b23d6fc"). InnerVolumeSpecName "kube-api-access-f2lf9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.773420 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-scripts" (OuterVolumeSpecName: "scripts") pod "a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e" (UID: "a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.793959 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-config-data" (OuterVolumeSpecName: "config-data") pod "cdfcaeab-5a54-4f15-9e69-f5712b23d6fc" (UID: "cdfcaeab-5a54-4f15-9e69-f5712b23d6fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.800532 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-scripts" (OuterVolumeSpecName: "scripts") pod "cdfcaeab-5a54-4f15-9e69-f5712b23d6fc" (UID: "cdfcaeab-5a54-4f15-9e69-f5712b23d6fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.815332 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e" (UID: "a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.825719 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-config-data" (OuterVolumeSpecName: "config-data") pod "a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e" (UID: "a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.860774 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.860809 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2lf9\" (UniqueName: \"kubernetes.io/projected/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-kube-api-access-f2lf9\") on node \"crc\" DevicePath \"\""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.860820 4820 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.860831 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-scripts\") on node \"crc\" DevicePath \"\""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.860840 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.860849 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-logs\") on node \"crc\" DevicePath \"\""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.860856 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-scripts\") on node \"crc\" DevicePath \"\""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.860864 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-config-data\") on node \"crc\" DevicePath \"\""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.860890 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc-config-data\") on node \"crc\" DevicePath \"\""
Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.860899 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
\"crc\" DevicePath \"\"" Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.860907 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwttt\" (UniqueName: \"kubernetes.io/projected/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-kube-api-access-wwttt\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:43 crc kubenswrapper[4820]: I0201 15:07:43.860916 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e-logs\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.074793 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e","Type":"ContainerDied","Data":"f004be6ccd8ae392950785d21aa11af3c090dc84eb1fff745de111de1657591d"} Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.074851 4820 scope.go:117] "RemoveContainer" containerID="9dca7ae4040c040f0dc0a8199b59471f7071a7ff95a28cfd4a5f8f2ebbf27160" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.074971 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.082239 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8445bf989c-rxj4q" event={"ID":"cdfcaeab-5a54-4f15-9e69-f5712b23d6fc","Type":"ContainerDied","Data":"a1cd13aacaee1cf1f1997f266cbacf860f8e83f4738c3d58ea3d08f7478a23f6"} Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.082334 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8445bf989c-rxj4q" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.086408 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5897584f9c-s8gwr" event={"ID":"90ffe183-d2c6-4914-9c7b-faedde2e565a","Type":"ContainerDied","Data":"639fc652b76e01ec30121dc6bfbd7f6db6a922d791a8ba00c600f520a8580e22"} Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.086666 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5897584f9c-s8gwr" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.117374 4820 scope.go:117] "RemoveContainer" containerID="d00b16f47861163f91157a96babf633788df1e5ba7403087ae746d140a398d75" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.117767 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.140444 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.167286 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5897584f9c-s8gwr"] Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.179613 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5897584f9c-s8gwr"] Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.191266 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Feb 01 15:07:44 crc kubenswrapper[4820]: E0201 15:07:44.191813 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="188a376a-f9ba-49e2-9747-467a340cc0f1" containerName="registry-server" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.191832 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="188a376a-f9ba-49e2-9747-467a340cc0f1" containerName="registry-server" Feb 01 15:07:44 crc kubenswrapper[4820]: E0201 15:07:44.191845 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e" containerName="manila-api" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.191851 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e" containerName="manila-api" Feb 01 15:07:44 crc kubenswrapper[4820]: E0201 15:07:44.191862 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90ffe183-d2c6-4914-9c7b-faedde2e565a" containerName="horizon-log" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.191907 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ffe183-d2c6-4914-9c7b-faedde2e565a" containerName="horizon-log" Feb 01 15:07:44 crc kubenswrapper[4820]: E0201 15:07:44.191930 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="188a376a-f9ba-49e2-9747-467a340cc0f1" containerName="extract-content" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.191936 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="188a376a-f9ba-49e2-9747-467a340cc0f1" containerName="extract-content" Feb 01 15:07:44 crc kubenswrapper[4820]: E0201 15:07:44.191950 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdfcaeab-5a54-4f15-9e69-f5712b23d6fc" containerName="horizon-log" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.191958 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdfcaeab-5a54-4f15-9e69-f5712b23d6fc" containerName="horizon-log" Feb 01 15:07:44 crc kubenswrapper[4820]: E0201 15:07:44.191970 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e" containerName="manila-api-log" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.191976 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e" containerName="manila-api-log" Feb 01 15:07:44 crc kubenswrapper[4820]: E0201 15:07:44.191986 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="188a376a-f9ba-49e2-9747-467a340cc0f1" 
containerName="extract-utilities" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.191993 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="188a376a-f9ba-49e2-9747-467a340cc0f1" containerName="extract-utilities" Feb 01 15:07:44 crc kubenswrapper[4820]: E0201 15:07:44.192006 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdfcaeab-5a54-4f15-9e69-f5712b23d6fc" containerName="horizon" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.192013 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdfcaeab-5a54-4f15-9e69-f5712b23d6fc" containerName="horizon" Feb 01 15:07:44 crc kubenswrapper[4820]: E0201 15:07:44.192023 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90ffe183-d2c6-4914-9c7b-faedde2e565a" containerName="horizon" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.192031 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ffe183-d2c6-4914-9c7b-faedde2e565a" containerName="horizon" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.192201 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e" containerName="manila-api-log" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.192212 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e" containerName="manila-api" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.192232 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="188a376a-f9ba-49e2-9747-467a340cc0f1" containerName="registry-server" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.192241 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="90ffe183-d2c6-4914-9c7b-faedde2e565a" containerName="horizon" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.192250 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdfcaeab-5a54-4f15-9e69-f5712b23d6fc" containerName="horizon-log" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.192261 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="90ffe183-d2c6-4914-9c7b-faedde2e565a" containerName="horizon-log" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.192268 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdfcaeab-5a54-4f15-9e69-f5712b23d6fc" containerName="horizon" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.193320 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.199014 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.199201 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.199235 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.203390 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.216910 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8445bf989c-rxj4q"] Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.230699 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-8445bf989c-rxj4q"] Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.252080 4820 scope.go:117] "RemoveContainer" containerID="0ce877ee0ddc0d61c8039e2afa0d5a58810cc73c5c566e98cfc613e1a4694547" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.374183 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzmgz\" (UniqueName: \"kubernetes.io/projected/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-kube-api-access-nzmgz\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.374235 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-config-data-custom\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.375177 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-etc-machine-id\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.375235 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.375273 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-scripts\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.375365 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-config-data\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.375494 4820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-internal-tls-certs\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.375521 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-public-tls-certs\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.375805 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-logs\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.482263 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-config-data\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.482435 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-internal-tls-certs\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.482464 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-public-tls-certs\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.482744 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-logs\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.482863 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzmgz\" (UniqueName: \"kubernetes.io/projected/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-kube-api-access-nzmgz\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.482930 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-config-data-custom\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.483109 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-etc-machine-id\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc 
kubenswrapper[4820]: I0201 15:07:44.483147 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.483173 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-scripts\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.483391 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-etc-machine-id\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.489073 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-config-data\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.489818 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-internal-tls-certs\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.503069 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzmgz\" (UniqueName: \"kubernetes.io/projected/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-kube-api-access-nzmgz\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.521410 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-logs\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.531538 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-scripts\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.532611 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.533315 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-public-tls-certs\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.535094 4820 operation_generator.go:637] 
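For the replacement manila-api-0 pod (new UID 46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485) each volume moves through the three reconciler phases visible above: VerifyControllerAttachedVolume (reconciler_common.go:245), MountVolume started (reconciler_common.go:218), and MountVolume.SetUp succeeded (operation_generator.go:637). A sketch (hypothetical helper, assuming the escaped-quote log format captured here) that pairs the started/succeeded entries and flags mounts that never complete:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	started   = regexp.MustCompile(`"operationExecutor\.MountVolume started for volume \\"([^"\\]+)\\".*pod="([^"]+)"`)
	succeeded = regexp.MustCompile(`"MountVolume\.SetUp succeeded for volume \\"([^"\\]+)\\".*pod="([^"]+)"`)
)

func main() {
	pending := map[string]bool{} // "namespace/pod/volume" still mounting
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		line := sc.Text()
		if m := started.FindStringSubmatch(line); m != nil {
			pending[m[2]+"/"+m[1]] = true
		}
		if m := succeeded.FindStringSubmatch(line); m != nil {
			delete(pending, m[2]+"/"+m[1])
		}
	}
	for key := range pending {
		fmt.Println("mount never completed:", key)
	}
}

On this window it should print nothing for manila-api-0, since all nine started mounts report SetUp success within about 150ms.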
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485-config-data-custom\") pod \"manila-api-0\" (UID: \"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485\") " pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.571137 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-69c64959b6-498kr" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.607475 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.707416 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-79c9fd7b88-c7gn4"] Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.711826 4820 scope.go:117] "RemoveContainer" containerID="39cbfbab8bd3e2d83341a15e9de387fd8f9642aa82775d9d72d4481b3ca5174a" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.746417 4820 scope.go:117] "RemoveContainer" containerID="4f21622ffca2e20231e1ba387568f74dae2166a9689989f864803ea1f7ef5110" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.815137 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 01 15:07:44 crc kubenswrapper[4820]: I0201 15:07:44.959094 4820 scope.go:117] "RemoveContainer" containerID="67cc13e343b3b5e9996ea8646950e46013ee327989d857eb6c2e633c4b5f5e81" Feb 01 15:07:45 crc kubenswrapper[4820]: I0201 15:07:45.124307 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-79c9fd7b88-c7gn4" podUID="9551e678-9809-43e8-8ea2-33c7b873f076" containerName="horizon-log" containerID="cri-o://b28977f89c28d14b467c829ed188a961a6e13bc5d6afb6189bcb2b6659a2f028" gracePeriod=30 Feb 01 15:07:45 crc kubenswrapper[4820]: I0201 15:07:45.124834 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-79c9fd7b88-c7gn4" podUID="9551e678-9809-43e8-8ea2-33c7b873f076" containerName="horizon" containerID="cri-o://e3d513fee4c22326779615924705d06357ff59f7e252b594a9eec22c366211bc" gracePeriod=30 Feb 01 15:07:45 crc kubenswrapper[4820]: I0201 15:07:45.209836 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90ffe183-d2c6-4914-9c7b-faedde2e565a" path="/var/lib/kubelet/pods/90ffe183-d2c6-4914-9c7b-faedde2e565a/volumes" Feb 01 15:07:45 crc kubenswrapper[4820]: I0201 15:07:45.211117 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e" path="/var/lib/kubelet/pods/a5e26fd7-0a9d-4d5a-b6ef-996ac2c0ff8e/volumes" Feb 01 15:07:45 crc kubenswrapper[4820]: I0201 15:07:45.211981 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdfcaeab-5a54-4f15-9e69-f5712b23d6fc" path="/var/lib/kubelet/pods/cdfcaeab-5a54-4f15-9e69-f5712b23d6fc/volumes" Feb 01 15:07:45 crc kubenswrapper[4820]: I0201 15:07:45.377934 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 01 15:07:45 crc kubenswrapper[4820]: I0201 15:07:45.378260 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9855d957-a352-426f-8b46-0f77f47c0d6c" containerName="ceilometer-central-agent" containerID="cri-o://4b379e07fe57f063361a1ce621aa4a84c0e98687f48d8b08a4f1a9f37a91348f" gracePeriod=30 Feb 01 15:07:45 crc kubenswrapper[4820]: I0201 15:07:45.378281 4820 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="9855d957-a352-426f-8b46-0f77f47c0d6c" containerName="proxy-httpd" containerID="cri-o://de55a0e41edf9855c9c4ab441b97d105ed9be3b272d8f2a729bd53f87631c592" gracePeriod=30 Feb 01 15:07:45 crc kubenswrapper[4820]: I0201 15:07:45.378360 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9855d957-a352-426f-8b46-0f77f47c0d6c" containerName="sg-core" containerID="cri-o://baa6707e177e6515b30b164bfaaf7a0ec0d7d8f51bd4b5f6f8a26b93062a8f75" gracePeriod=30 Feb 01 15:07:45 crc kubenswrapper[4820]: I0201 15:07:45.378391 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9855d957-a352-426f-8b46-0f77f47c0d6c" containerName="ceilometer-notification-agent" containerID="cri-o://90558a52ac83d67af670de478e94d8575dbd0d29f9c3d41de8632a79fcf19341" gracePeriod=30 Feb 01 15:07:45 crc kubenswrapper[4820]: I0201 15:07:45.509749 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 01 15:07:46 crc kubenswrapper[4820]: I0201 15:07:46.142749 4820 generic.go:334] "Generic (PLEG): container finished" podID="9855d957-a352-426f-8b46-0f77f47c0d6c" containerID="de55a0e41edf9855c9c4ab441b97d105ed9be3b272d8f2a729bd53f87631c592" exitCode=0 Feb 01 15:07:46 crc kubenswrapper[4820]: I0201 15:07:46.143070 4820 generic.go:334] "Generic (PLEG): container finished" podID="9855d957-a352-426f-8b46-0f77f47c0d6c" containerID="baa6707e177e6515b30b164bfaaf7a0ec0d7d8f51bd4b5f6f8a26b93062a8f75" exitCode=2 Feb 01 15:07:46 crc kubenswrapper[4820]: I0201 15:07:46.143083 4820 generic.go:334] "Generic (PLEG): container finished" podID="9855d957-a352-426f-8b46-0f77f47c0d6c" containerID="4b379e07fe57f063361a1ce621aa4a84c0e98687f48d8b08a4f1a9f37a91348f" exitCode=0 Feb 01 15:07:46 crc kubenswrapper[4820]: I0201 15:07:46.143105 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9855d957-a352-426f-8b46-0f77f47c0d6c","Type":"ContainerDied","Data":"de55a0e41edf9855c9c4ab441b97d105ed9be3b272d8f2a729bd53f87631c592"} Feb 01 15:07:46 crc kubenswrapper[4820]: I0201 15:07:46.143132 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9855d957-a352-426f-8b46-0f77f47c0d6c","Type":"ContainerDied","Data":"baa6707e177e6515b30b164bfaaf7a0ec0d7d8f51bd4b5f6f8a26b93062a8f75"} Feb 01 15:07:46 crc kubenswrapper[4820]: I0201 15:07:46.143142 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9855d957-a352-426f-8b46-0f77f47c0d6c","Type":"ContainerDied","Data":"4b379e07fe57f063361a1ce621aa4a84c0e98687f48d8b08a4f1a9f37a91348f"} Feb 01 15:07:48 crc kubenswrapper[4820]: I0201 15:07:48.171655 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485","Type":"ContainerStarted","Data":"3d5d7dab589923f9b7a2f3fc061ee3313d4efc27724cf7f3c62aeaed6b42fa5c"} Feb 01 15:07:48 crc kubenswrapper[4820]: I0201 15:07:48.547169 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Feb 01 15:07:48 crc kubenswrapper[4820]: I0201 15:07:48.633215 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69655fd4bf-ll64p" Feb 01 15:07:48 crc kubenswrapper[4820]: I0201 15:07:48.726076 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-fbc59fbb7-2f2ml"] Feb 01 15:07:48 crc kubenswrapper[4820]: I0201 15:07:48.726343 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" podUID="4122afb2-67c9-4360-b5d0-72ab7b8bc7ca" containerName="dnsmasq-dns" containerID="cri-o://bb61a2a412cc6566a6a86bd1d242c071235ea2a86c5269179e1ca27cd9176182" gracePeriod=10 Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.187841 4820 generic.go:334] "Generic (PLEG): container finished" podID="4122afb2-67c9-4360-b5d0-72ab7b8bc7ca" containerID="bb61a2a412cc6566a6a86bd1d242c071235ea2a86c5269179e1ca27cd9176182" exitCode=0 Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.188268 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" event={"ID":"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca","Type":"ContainerDied","Data":"bb61a2a412cc6566a6a86bd1d242c071235ea2a86c5269179e1ca27cd9176182"} Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.196424 4820 generic.go:334] "Generic (PLEG): container finished" podID="9551e678-9809-43e8-8ea2-33c7b873f076" containerID="e3d513fee4c22326779615924705d06357ff59f7e252b594a9eec22c366211bc" exitCode=0 Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.196506 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79c9fd7b88-c7gn4" event={"ID":"9551e678-9809-43e8-8ea2-33c7b873f076","Type":"ContainerDied","Data":"e3d513fee4c22326779615924705d06357ff59f7e252b594a9eec22c366211bc"} Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.216676 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485","Type":"ContainerStarted","Data":"78a00e8603e7c51e2229f2b01612bf06b12b6023825648b8f0d4930ca1cc168e"} Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.216719 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485","Type":"ContainerStarted","Data":"314fa5a799f47464d495471b0ba37d928e8740d533c61ba93a052756eecba0d4"} Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.216729 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"ef4f913d-b12b-4cf0-af89-7b289df9ceed","Type":"ContainerStarted","Data":"3dc9541cf2fe28573a7812a929dd59b918036af87c058a71a24b3e11ebd7a793"} Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.216739 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"ef4f913d-b12b-4cf0-af89-7b289df9ceed","Type":"ContainerStarted","Data":"9b3674160ff59a2438bf2253de9e3b9d65a669fad0f86dc3791d1478efd58758"} Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.276547 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=5.276530072 podStartE2EDuration="5.276530072s" podCreationTimestamp="2026-02-01 15:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 15:07:49.275925958 +0000 UTC m=+2810.796292242" watchObservedRunningTime="2026-02-01 15:07:49.276530072 +0000 UTC m=+2810.796896356" Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.306611 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=2.653689662 podStartE2EDuration="11.306589932s" 
podCreationTimestamp="2026-02-01 15:07:38 +0000 UTC" firstStartedPulling="2026-02-01 15:07:39.219993197 +0000 UTC m=+2800.740359491" lastFinishedPulling="2026-02-01 15:07:47.872893477 +0000 UTC m=+2809.393259761" observedRunningTime="2026-02-01 15:07:49.301924581 +0000 UTC m=+2810.822290865" watchObservedRunningTime="2026-02-01 15:07:49.306589932 +0000 UTC m=+2810.826956216" Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.327195 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.482733 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-openstack-edpm-ipam\") pod \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\" (UID: \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\") " Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.483009 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-dns-svc\") pod \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\" (UID: \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\") " Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.483034 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td87b\" (UniqueName: \"kubernetes.io/projected/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-kube-api-access-td87b\") pod \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\" (UID: \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\") " Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.483080 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-ovsdbserver-sb\") pod \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\" (UID: \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\") " Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.483107 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-ovsdbserver-nb\") pod \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\" (UID: \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\") " Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.483145 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-config\") pod \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\" (UID: \"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca\") " Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.494507 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-kube-api-access-td87b" (OuterVolumeSpecName: "kube-api-access-td87b") pod "4122afb2-67c9-4360-b5d0-72ab7b8bc7ca" (UID: "4122afb2-67c9-4360-b5d0-72ab7b8bc7ca"). InnerVolumeSpecName "kube-api-access-td87b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.545169 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4122afb2-67c9-4360-b5d0-72ab7b8bc7ca" (UID: "4122afb2-67c9-4360-b5d0-72ab7b8bc7ca"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.550859 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "4122afb2-67c9-4360-b5d0-72ab7b8bc7ca" (UID: "4122afb2-67c9-4360-b5d0-72ab7b8bc7ca"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.551866 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-config" (OuterVolumeSpecName: "config") pod "4122afb2-67c9-4360-b5d0-72ab7b8bc7ca" (UID: "4122afb2-67c9-4360-b5d0-72ab7b8bc7ca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.562630 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4122afb2-67c9-4360-b5d0-72ab7b8bc7ca" (UID: "4122afb2-67c9-4360-b5d0-72ab7b8bc7ca"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.578353 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4122afb2-67c9-4360-b5d0-72ab7b8bc7ca" (UID: "4122afb2-67c9-4360-b5d0-72ab7b8bc7ca"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.585515 4820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-config\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.585770 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.585910 4820 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.586016 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td87b\" (UniqueName: \"kubernetes.io/projected/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-kube-api-access-td87b\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.586117 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:49 crc kubenswrapper[4820]: I0201 15:07:49.586221 4820 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:50 crc kubenswrapper[4820]: I0201 15:07:50.221241 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" Feb 01 15:07:50 crc kubenswrapper[4820]: I0201 15:07:50.221484 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-2f2ml" event={"ID":"4122afb2-67c9-4360-b5d0-72ab7b8bc7ca","Type":"ContainerDied","Data":"61c277d84ce6185675862194cd952b584d681456607e9ff2e9abe7277a9fb22f"} Feb 01 15:07:50 crc kubenswrapper[4820]: I0201 15:07:50.221725 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Feb 01 15:07:50 crc kubenswrapper[4820]: I0201 15:07:50.221759 4820 scope.go:117] "RemoveContainer" containerID="bb61a2a412cc6566a6a86bd1d242c071235ea2a86c5269179e1ca27cd9176182" Feb 01 15:07:50 crc kubenswrapper[4820]: I0201 15:07:50.265575 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-79c9fd7b88-c7gn4" podUID="9551e678-9809-43e8-8ea2-33c7b873f076" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.241:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.241:8443: connect: connection refused" Feb 01 15:07:50 crc kubenswrapper[4820]: I0201 15:07:50.272151 4820 scope.go:117] "RemoveContainer" containerID="d480f3d9776484a4e4da498d3269efb7a9f4057630e34cbdf3fdf563a745444d" Feb 01 15:07:50 crc kubenswrapper[4820]: I0201 15:07:50.274541 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-2f2ml"] Feb 01 15:07:50 crc kubenswrapper[4820]: I0201 15:07:50.293003 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-2f2ml"] Feb 01 15:07:51 crc kubenswrapper[4820]: I0201 15:07:51.238013 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4122afb2-67c9-4360-b5d0-72ab7b8bc7ca" path="/var/lib/kubelet/pods/4122afb2-67c9-4360-b5d0-72ab7b8bc7ca/volumes" Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.261057 4820 generic.go:334] "Generic (PLEG): container finished" podID="9855d957-a352-426f-8b46-0f77f47c0d6c" containerID="90558a52ac83d67af670de478e94d8575dbd0d29f9c3d41de8632a79fcf19341" exitCode=0 Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.261255 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9855d957-a352-426f-8b46-0f77f47c0d6c","Type":"ContainerDied","Data":"90558a52ac83d67af670de478e94d8575dbd0d29f9c3d41de8632a79fcf19341"} Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.261430 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9855d957-a352-426f-8b46-0f77f47c0d6c","Type":"ContainerDied","Data":"73fa06ff434fc50751ad0428b9bb9595d2aaf7ed22bfaab172662d836d0b0092"} Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.261453 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73fa06ff434fc50751ad0428b9bb9595d2aaf7ed22bfaab172662d836d0b0092" Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.325233 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.469569 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9855d957-a352-426f-8b46-0f77f47c0d6c-run-httpd\") pod \"9855d957-a352-426f-8b46-0f77f47c0d6c\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.469633 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-scripts\") pod \"9855d957-a352-426f-8b46-0f77f47c0d6c\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.469720 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-sg-core-conf-yaml\") pod \"9855d957-a352-426f-8b46-0f77f47c0d6c\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.469793 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-combined-ca-bundle\") pod \"9855d957-a352-426f-8b46-0f77f47c0d6c\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.469835 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hktpl\" (UniqueName: \"kubernetes.io/projected/9855d957-a352-426f-8b46-0f77f47c0d6c-kube-api-access-hktpl\") pod \"9855d957-a352-426f-8b46-0f77f47c0d6c\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.469869 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-config-data\") pod \"9855d957-a352-426f-8b46-0f77f47c0d6c\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.469973 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9855d957-a352-426f-8b46-0f77f47c0d6c-log-httpd\") pod \"9855d957-a352-426f-8b46-0f77f47c0d6c\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.470066 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-ceilometer-tls-certs\") pod \"9855d957-a352-426f-8b46-0f77f47c0d6c\" (UID: \"9855d957-a352-426f-8b46-0f77f47c0d6c\") " Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.470905 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9855d957-a352-426f-8b46-0f77f47c0d6c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9855d957-a352-426f-8b46-0f77f47c0d6c" (UID: "9855d957-a352-426f-8b46-0f77f47c0d6c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.473359 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9855d957-a352-426f-8b46-0f77f47c0d6c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9855d957-a352-426f-8b46-0f77f47c0d6c" (UID: "9855d957-a352-426f-8b46-0f77f47c0d6c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.479709 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9855d957-a352-426f-8b46-0f77f47c0d6c-kube-api-access-hktpl" (OuterVolumeSpecName: "kube-api-access-hktpl") pod "9855d957-a352-426f-8b46-0f77f47c0d6c" (UID: "9855d957-a352-426f-8b46-0f77f47c0d6c"). InnerVolumeSpecName "kube-api-access-hktpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.481001 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-scripts" (OuterVolumeSpecName: "scripts") pod "9855d957-a352-426f-8b46-0f77f47c0d6c" (UID: "9855d957-a352-426f-8b46-0f77f47c0d6c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.507102 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9855d957-a352-426f-8b46-0f77f47c0d6c" (UID: "9855d957-a352-426f-8b46-0f77f47c0d6c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.540955 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9855d957-a352-426f-8b46-0f77f47c0d6c" (UID: "9855d957-a352-426f-8b46-0f77f47c0d6c"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.572463 4820 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.572509 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hktpl\" (UniqueName: \"kubernetes.io/projected/9855d957-a352-426f-8b46-0f77f47c0d6c-kube-api-access-hktpl\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.572527 4820 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9855d957-a352-426f-8b46-0f77f47c0d6c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.572538 4820 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.572550 4820 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9855d957-a352-426f-8b46-0f77f47c0d6c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.572563 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.577641 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9855d957-a352-426f-8b46-0f77f47c0d6c" (UID: "9855d957-a352-426f-8b46-0f77f47c0d6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.582488 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-config-data" (OuterVolumeSpecName: "config-data") pod "9855d957-a352-426f-8b46-0f77f47c0d6c" (UID: "9855d957-a352-426f-8b46-0f77f47c0d6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.674628 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:53 crc kubenswrapper[4820]: I0201 15:07:53.674657 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9855d957-a352-426f-8b46-0f77f47c0d6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.272284 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.328769 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.341001 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.357286 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 01 15:07:54 crc kubenswrapper[4820]: E0201 15:07:54.357652 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4122afb2-67c9-4360-b5d0-72ab7b8bc7ca" containerName="dnsmasq-dns" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.357671 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4122afb2-67c9-4360-b5d0-72ab7b8bc7ca" containerName="dnsmasq-dns" Feb 01 15:07:54 crc kubenswrapper[4820]: E0201 15:07:54.357697 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9855d957-a352-426f-8b46-0f77f47c0d6c" containerName="sg-core" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.357706 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9855d957-a352-426f-8b46-0f77f47c0d6c" containerName="sg-core" Feb 01 15:07:54 crc kubenswrapper[4820]: E0201 15:07:54.357718 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4122afb2-67c9-4360-b5d0-72ab7b8bc7ca" containerName="init" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.357723 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4122afb2-67c9-4360-b5d0-72ab7b8bc7ca" containerName="init" Feb 01 15:07:54 crc kubenswrapper[4820]: E0201 15:07:54.357733 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9855d957-a352-426f-8b46-0f77f47c0d6c" containerName="proxy-httpd" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.357738 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9855d957-a352-426f-8b46-0f77f47c0d6c" containerName="proxy-httpd" Feb 01 15:07:54 crc kubenswrapper[4820]: E0201 15:07:54.357746 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9855d957-a352-426f-8b46-0f77f47c0d6c" containerName="ceilometer-central-agent" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.357751 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9855d957-a352-426f-8b46-0f77f47c0d6c" containerName="ceilometer-central-agent" Feb 01 15:07:54 crc kubenswrapper[4820]: E0201 15:07:54.357779 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9855d957-a352-426f-8b46-0f77f47c0d6c" containerName="ceilometer-notification-agent" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.357786 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9855d957-a352-426f-8b46-0f77f47c0d6c" containerName="ceilometer-notification-agent" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.357966 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9855d957-a352-426f-8b46-0f77f47c0d6c" containerName="sg-core" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.357980 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9855d957-a352-426f-8b46-0f77f47c0d6c" containerName="ceilometer-central-agent" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.358001 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4122afb2-67c9-4360-b5d0-72ab7b8bc7ca" containerName="dnsmasq-dns" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.358011 4820 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9855d957-a352-426f-8b46-0f77f47c0d6c" containerName="proxy-httpd" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.358021 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9855d957-a352-426f-8b46-0f77f47c0d6c" containerName="ceilometer-notification-agent" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.359622 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.366709 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.367034 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.367055 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.382044 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.491249 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/570bc5fa-118c-47b4-84fa-fac88c5dc213-log-httpd\") pod \"ceilometer-0\" (UID: \"570bc5fa-118c-47b4-84fa-fac88c5dc213\") " pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.491311 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/570bc5fa-118c-47b4-84fa-fac88c5dc213-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"570bc5fa-118c-47b4-84fa-fac88c5dc213\") " pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.491390 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/570bc5fa-118c-47b4-84fa-fac88c5dc213-scripts\") pod \"ceilometer-0\" (UID: \"570bc5fa-118c-47b4-84fa-fac88c5dc213\") " pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.491609 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570bc5fa-118c-47b4-84fa-fac88c5dc213-config-data\") pod \"ceilometer-0\" (UID: \"570bc5fa-118c-47b4-84fa-fac88c5dc213\") " pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.491762 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570bc5fa-118c-47b4-84fa-fac88c5dc213-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"570bc5fa-118c-47b4-84fa-fac88c5dc213\") " pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.492134 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/570bc5fa-118c-47b4-84fa-fac88c5dc213-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"570bc5fa-118c-47b4-84fa-fac88c5dc213\") " pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.492387 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
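The SyncLoop ADD/UPDATE/DELETE/REMOVE entries are the kubelet reacting to pod changes streamed from the API server; the reflector lines just before them show its informer caches filling with the Secrets the new ceilometer-0 pod mounts. The same stream can be watched from outside the node with a client-go informer; a minimal sketch (assumes a reachable kubeconfig at the default location, and prints events for the openstack namespace used throughout this log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Watch only the openstack namespace, with no periodic resync.
	factory := informers.NewSharedInformerFactoryWithOptions(
		clientset, 0, informers.WithNamespace("openstack"))
	informer := factory.Core().V1().Pods().Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("ADD", obj.(*corev1.Pod).Name)
		},
		UpdateFunc: func(_, obj interface{}) {
			fmt.Println("UPDATE", obj.(*corev1.Pod).Name)
		},
		DeleteFunc: func(obj interface{}) {
			if pod, ok := obj.(*corev1.Pod); ok {
				fmt.Println("DELETE", pod.Name)
			}
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	select {} // run until interrupted
}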
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/570bc5fa-118c-47b4-84fa-fac88c5dc213-run-httpd\") pod \"ceilometer-0\" (UID: \"570bc5fa-118c-47b4-84fa-fac88c5dc213\") " pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.492474 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lmvr\" (UniqueName: \"kubernetes.io/projected/570bc5fa-118c-47b4-84fa-fac88c5dc213-kube-api-access-8lmvr\") pod \"ceilometer-0\" (UID: \"570bc5fa-118c-47b4-84fa-fac88c5dc213\") " pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.595298 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570bc5fa-118c-47b4-84fa-fac88c5dc213-config-data\") pod \"ceilometer-0\" (UID: \"570bc5fa-118c-47b4-84fa-fac88c5dc213\") " pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.595391 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570bc5fa-118c-47b4-84fa-fac88c5dc213-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"570bc5fa-118c-47b4-84fa-fac88c5dc213\") " pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.595513 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/570bc5fa-118c-47b4-84fa-fac88c5dc213-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"570bc5fa-118c-47b4-84fa-fac88c5dc213\") " pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.595596 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/570bc5fa-118c-47b4-84fa-fac88c5dc213-run-httpd\") pod \"ceilometer-0\" (UID: \"570bc5fa-118c-47b4-84fa-fac88c5dc213\") " pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.595640 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lmvr\" (UniqueName: \"kubernetes.io/projected/570bc5fa-118c-47b4-84fa-fac88c5dc213-kube-api-access-8lmvr\") pod \"ceilometer-0\" (UID: \"570bc5fa-118c-47b4-84fa-fac88c5dc213\") " pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.595716 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/570bc5fa-118c-47b4-84fa-fac88c5dc213-log-httpd\") pod \"ceilometer-0\" (UID: \"570bc5fa-118c-47b4-84fa-fac88c5dc213\") " pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.595764 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/570bc5fa-118c-47b4-84fa-fac88c5dc213-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"570bc5fa-118c-47b4-84fa-fac88c5dc213\") " pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.595868 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/570bc5fa-118c-47b4-84fa-fac88c5dc213-scripts\") pod \"ceilometer-0\" (UID: \"570bc5fa-118c-47b4-84fa-fac88c5dc213\") " pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.596228 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/570bc5fa-118c-47b4-84fa-fac88c5dc213-log-httpd\") pod \"ceilometer-0\" (UID: \"570bc5fa-118c-47b4-84fa-fac88c5dc213\") " pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.596411 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/570bc5fa-118c-47b4-84fa-fac88c5dc213-run-httpd\") pod \"ceilometer-0\" (UID: \"570bc5fa-118c-47b4-84fa-fac88c5dc213\") " pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.601613 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570bc5fa-118c-47b4-84fa-fac88c5dc213-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"570bc5fa-118c-47b4-84fa-fac88c5dc213\") " pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.601983 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/570bc5fa-118c-47b4-84fa-fac88c5dc213-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"570bc5fa-118c-47b4-84fa-fac88c5dc213\") " pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.603943 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570bc5fa-118c-47b4-84fa-fac88c5dc213-config-data\") pod \"ceilometer-0\" (UID: \"570bc5fa-118c-47b4-84fa-fac88c5dc213\") " pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.606112 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/570bc5fa-118c-47b4-84fa-fac88c5dc213-scripts\") pod \"ceilometer-0\" (UID: \"570bc5fa-118c-47b4-84fa-fac88c5dc213\") " pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.616261 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lmvr\" (UniqueName: \"kubernetes.io/projected/570bc5fa-118c-47b4-84fa-fac88c5dc213-kube-api-access-8lmvr\") pod \"ceilometer-0\" (UID: \"570bc5fa-118c-47b4-84fa-fac88c5dc213\") " pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.623848 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/570bc5fa-118c-47b4-84fa-fac88c5dc213-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"570bc5fa-118c-47b4-84fa-fac88c5dc213\") " pod="openstack/ceilometer-0" Feb 01 15:07:54 crc kubenswrapper[4820]: I0201 15:07:54.680320 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 01 15:07:55 crc kubenswrapper[4820]: I0201 15:07:55.195267 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 01 15:07:55 crc kubenswrapper[4820]: W0201 15:07:55.195943 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod570bc5fa_118c_47b4_84fa_fac88c5dc213.slice/crio-71894f20dd85ff1d1f3ae8b2057f202b6fe4d085b66975f6bc40a40a9757767d WatchSource:0}: Error finding container 71894f20dd85ff1d1f3ae8b2057f202b6fe4d085b66975f6bc40a40a9757767d: Status 404 returned error can't find the container with id 71894f20dd85ff1d1f3ae8b2057f202b6fe4d085b66975f6bc40a40a9757767d Feb 01 15:07:55 crc kubenswrapper[4820]: I0201 15:07:55.213040 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9855d957-a352-426f-8b46-0f77f47c0d6c" path="/var/lib/kubelet/pods/9855d957-a352-426f-8b46-0f77f47c0d6c/volumes" Feb 01 15:07:55 crc kubenswrapper[4820]: I0201 15:07:55.284733 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"570bc5fa-118c-47b4-84fa-fac88c5dc213","Type":"ContainerStarted","Data":"71894f20dd85ff1d1f3ae8b2057f202b6fe4d085b66975f6bc40a40a9757767d"} Feb 01 15:07:56 crc kubenswrapper[4820]: I0201 15:07:56.294413 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"570bc5fa-118c-47b4-84fa-fac88c5dc213","Type":"ContainerStarted","Data":"cad5ea7d6b306acb98645cfde2956998da3787223ed799de339fcec4060a59c3"} Feb 01 15:07:57 crc kubenswrapper[4820]: I0201 15:07:57.307625 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"570bc5fa-118c-47b4-84fa-fac88c5dc213","Type":"ContainerStarted","Data":"bda3bed6acda30eb9dda8d7f6e1189706f3ca1bc2e8afd5fdd064d7b933df7fc"} Feb 01 15:07:58 crc kubenswrapper[4820]: I0201 15:07:58.322143 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"570bc5fa-118c-47b4-84fa-fac88c5dc213","Type":"ContainerStarted","Data":"2c6a43dcdad3f37c675dd2bd8c2d4a0080ec9ce3a5aecb8e791daba6ea8c5b85"} Feb 01 15:07:58 crc kubenswrapper[4820]: I0201 15:07:58.691703 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Feb 01 15:08:00 crc kubenswrapper[4820]: I0201 15:08:00.085783 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Feb 01 15:08:00 crc kubenswrapper[4820]: I0201 15:08:00.175929 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Feb 01 15:08:00 crc kubenswrapper[4820]: I0201 15:08:00.263378 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-79c9fd7b88-c7gn4" podUID="9551e678-9809-43e8-8ea2-33c7b873f076" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.241:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.241:8443: connect: connection refused" Feb 01 15:08:00 crc kubenswrapper[4820]: I0201 15:08:00.324309 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Feb 01 15:08:00 crc kubenswrapper[4820]: I0201 15:08:00.350371 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="f71b0669-a307-45e5-8950-a81d88db9cac" containerName="manila-scheduler" 
containerID="cri-o://1d6bd23d31dbc2ce207cd0f992ee89c2c2b8e79421952cbe59a5ee01d70fab36" gracePeriod=30 Feb 01 15:08:00 crc kubenswrapper[4820]: I0201 15:08:00.352100 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"570bc5fa-118c-47b4-84fa-fac88c5dc213","Type":"ContainerStarted","Data":"219e2e14db50ce392248dbed94b86c380d776f3be96734cf1abd97b9c36a44ad"} Feb 01 15:08:00 crc kubenswrapper[4820]: I0201 15:08:00.352148 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 01 15:08:00 crc kubenswrapper[4820]: I0201 15:08:00.352208 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="f71b0669-a307-45e5-8950-a81d88db9cac" containerName="probe" containerID="cri-o://c351321aefcc3e3c1f60c56ccf42a8deee63cb200083f647d0e6701c38d250d7" gracePeriod=30 Feb 01 15:08:00 crc kubenswrapper[4820]: I0201 15:08:00.394593 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Feb 01 15:08:00 crc kubenswrapper[4820]: I0201 15:08:00.394911 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="ef4f913d-b12b-4cf0-af89-7b289df9ceed" containerName="manila-share" containerID="cri-o://9b3674160ff59a2438bf2253de9e3b9d65a669fad0f86dc3791d1478efd58758" gracePeriod=30 Feb 01 15:08:00 crc kubenswrapper[4820]: I0201 15:08:00.395051 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="ef4f913d-b12b-4cf0-af89-7b289df9ceed" containerName="probe" containerID="cri-o://3dc9541cf2fe28573a7812a929dd59b918036af87c058a71a24b3e11ebd7a793" gracePeriod=30 Feb 01 15:08:00 crc kubenswrapper[4820]: I0201 15:08:00.423359 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.062936626 podStartE2EDuration="6.423338975s" podCreationTimestamp="2026-02-01 15:07:54 +0000 UTC" firstStartedPulling="2026-02-01 15:07:55.199062657 +0000 UTC m=+2816.719428941" lastFinishedPulling="2026-02-01 15:07:59.559465006 +0000 UTC m=+2821.079831290" observedRunningTime="2026-02-01 15:08:00.415014315 +0000 UTC m=+2821.935380609" watchObservedRunningTime="2026-02-01 15:08:00.423338975 +0000 UTC m=+2821.943705269" Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.390937 4820 generic.go:334] "Generic (PLEG): container finished" podID="ef4f913d-b12b-4cf0-af89-7b289df9ceed" containerID="3dc9541cf2fe28573a7812a929dd59b918036af87c058a71a24b3e11ebd7a793" exitCode=0 Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.391320 4820 generic.go:334] "Generic (PLEG): container finished" podID="ef4f913d-b12b-4cf0-af89-7b289df9ceed" containerID="9b3674160ff59a2438bf2253de9e3b9d65a669fad0f86dc3791d1478efd58758" exitCode=1 Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.391039 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"ef4f913d-b12b-4cf0-af89-7b289df9ceed","Type":"ContainerDied","Data":"3dc9541cf2fe28573a7812a929dd59b918036af87c058a71a24b3e11ebd7a793"} Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.391387 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"ef4f913d-b12b-4cf0-af89-7b289df9ceed","Type":"ContainerDied","Data":"9b3674160ff59a2438bf2253de9e3b9d65a669fad0f86dc3791d1478efd58758"} Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 
15:08:01.396547 4820 generic.go:334] "Generic (PLEG): container finished" podID="f71b0669-a307-45e5-8950-a81d88db9cac" containerID="c351321aefcc3e3c1f60c56ccf42a8deee63cb200083f647d0e6701c38d250d7" exitCode=0 Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.396567 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"f71b0669-a307-45e5-8950-a81d88db9cac","Type":"ContainerDied","Data":"c351321aefcc3e3c1f60c56ccf42a8deee63cb200083f647d0e6701c38d250d7"} Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.563336 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.700066 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef4f913d-b12b-4cf0-af89-7b289df9ceed-combined-ca-bundle\") pod \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.700436 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef4f913d-b12b-4cf0-af89-7b289df9ceed-etc-machine-id\") pod \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.700479 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99kfq\" (UniqueName: \"kubernetes.io/projected/ef4f913d-b12b-4cf0-af89-7b289df9ceed-kube-api-access-99kfq\") pod \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.700580 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef4f913d-b12b-4cf0-af89-7b289df9ceed-config-data\") pod \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.700637 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef4f913d-b12b-4cf0-af89-7b289df9ceed-config-data-custom\") pod \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.700659 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ef4f913d-b12b-4cf0-af89-7b289df9ceed-ceph\") pod \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.700675 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef4f913d-b12b-4cf0-af89-7b289df9ceed-scripts\") pod \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.700733 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/ef4f913d-b12b-4cf0-af89-7b289df9ceed-var-lib-manila\") pod \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\" (UID: \"ef4f913d-b12b-4cf0-af89-7b289df9ceed\") " Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 
15:08:01.701199 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef4f913d-b12b-4cf0-af89-7b289df9ceed-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "ef4f913d-b12b-4cf0-af89-7b289df9ceed" (UID: "ef4f913d-b12b-4cf0-af89-7b289df9ceed"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.702753 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef4f913d-b12b-4cf0-af89-7b289df9ceed-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ef4f913d-b12b-4cf0-af89-7b289df9ceed" (UID: "ef4f913d-b12b-4cf0-af89-7b289df9ceed"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.710072 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef4f913d-b12b-4cf0-af89-7b289df9ceed-scripts" (OuterVolumeSpecName: "scripts") pod "ef4f913d-b12b-4cf0-af89-7b289df9ceed" (UID: "ef4f913d-b12b-4cf0-af89-7b289df9ceed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.710103 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef4f913d-b12b-4cf0-af89-7b289df9ceed-kube-api-access-99kfq" (OuterVolumeSpecName: "kube-api-access-99kfq") pod "ef4f913d-b12b-4cf0-af89-7b289df9ceed" (UID: "ef4f913d-b12b-4cf0-af89-7b289df9ceed"). InnerVolumeSpecName "kube-api-access-99kfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.710742 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef4f913d-b12b-4cf0-af89-7b289df9ceed-ceph" (OuterVolumeSpecName: "ceph") pod "ef4f913d-b12b-4cf0-af89-7b289df9ceed" (UID: "ef4f913d-b12b-4cf0-af89-7b289df9ceed"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.710846 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef4f913d-b12b-4cf0-af89-7b289df9ceed-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ef4f913d-b12b-4cf0-af89-7b289df9ceed" (UID: "ef4f913d-b12b-4cf0-af89-7b289df9ceed"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.768206 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef4f913d-b12b-4cf0-af89-7b289df9ceed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef4f913d-b12b-4cf0-af89-7b289df9ceed" (UID: "ef4f913d-b12b-4cf0-af89-7b289df9ceed"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.803796 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef4f913d-b12b-4cf0-af89-7b289df9ceed-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.804542 4820 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ef4f913d-b12b-4cf0-af89-7b289df9ceed-ceph\") on node \"crc\" DevicePath \"\"" Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.804691 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef4f913d-b12b-4cf0-af89-7b289df9ceed-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.804840 4820 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/ef4f913d-b12b-4cf0-af89-7b289df9ceed-var-lib-manila\") on node \"crc\" DevicePath \"\"" Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.805060 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef4f913d-b12b-4cf0-af89-7b289df9ceed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.805204 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef4f913d-b12b-4cf0-af89-7b289df9ceed-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.805349 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99kfq\" (UniqueName: \"kubernetes.io/projected/ef4f913d-b12b-4cf0-af89-7b289df9ceed-kube-api-access-99kfq\") on node \"crc\" DevicePath \"\"" Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.842955 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef4f913d-b12b-4cf0-af89-7b289df9ceed-config-data" (OuterVolumeSpecName: "config-data") pod "ef4f913d-b12b-4cf0-af89-7b289df9ceed" (UID: "ef4f913d-b12b-4cf0-af89-7b289df9ceed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:08:01 crc kubenswrapper[4820]: I0201 15:08:01.907413 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef4f913d-b12b-4cf0-af89-7b289df9ceed-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.410850 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"ef4f913d-b12b-4cf0-af89-7b289df9ceed","Type":"ContainerDied","Data":"75c8218ef2c079300f69d5d14bd59c62c32f9e1907124a729c0c0bbd6fa1d79e"} Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.410950 4820 scope.go:117] "RemoveContainer" containerID="3dc9541cf2fe28573a7812a929dd59b918036af87c058a71a24b3e11ebd7a793" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.411006 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.468329 4820 scope.go:117] "RemoveContainer" containerID="9b3674160ff59a2438bf2253de9e3b9d65a669fad0f86dc3791d1478efd58758" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.480722 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.500948 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.515815 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Feb 01 15:08:02 crc kubenswrapper[4820]: E0201 15:08:02.516345 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef4f913d-b12b-4cf0-af89-7b289df9ceed" containerName="manila-share" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.516359 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef4f913d-b12b-4cf0-af89-7b289df9ceed" containerName="manila-share" Feb 01 15:08:02 crc kubenswrapper[4820]: E0201 15:08:02.516378 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef4f913d-b12b-4cf0-af89-7b289df9ceed" containerName="probe" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.516384 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef4f913d-b12b-4cf0-af89-7b289df9ceed" containerName="probe" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.516589 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef4f913d-b12b-4cf0-af89-7b289df9ceed" containerName="manila-share" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.516611 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef4f913d-b12b-4cf0-af89-7b289df9ceed" containerName="probe" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.518322 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.525125 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.528167 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.635990 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7067e66b-2ec5-405e-871a-aad4fd9fd5cd-config-data\") pod \"manila-share-share1-0\" (UID: \"7067e66b-2ec5-405e-871a-aad4fd9fd5cd\") " pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.636643 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tclwm\" (UniqueName: \"kubernetes.io/projected/7067e66b-2ec5-405e-871a-aad4fd9fd5cd-kube-api-access-tclwm\") pod \"manila-share-share1-0\" (UID: \"7067e66b-2ec5-405e-871a-aad4fd9fd5cd\") " pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.636829 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7067e66b-2ec5-405e-871a-aad4fd9fd5cd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"7067e66b-2ec5-405e-871a-aad4fd9fd5cd\") " pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.637184 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7067e66b-2ec5-405e-871a-aad4fd9fd5cd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"7067e66b-2ec5-405e-871a-aad4fd9fd5cd\") " pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.637288 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/7067e66b-2ec5-405e-871a-aad4fd9fd5cd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"7067e66b-2ec5-405e-871a-aad4fd9fd5cd\") " pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.637333 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7067e66b-2ec5-405e-871a-aad4fd9fd5cd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"7067e66b-2ec5-405e-871a-aad4fd9fd5cd\") " pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.637499 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7067e66b-2ec5-405e-871a-aad4fd9fd5cd-ceph\") pod \"manila-share-share1-0\" (UID: \"7067e66b-2ec5-405e-871a-aad4fd9fd5cd\") " pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.637829 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7067e66b-2ec5-405e-871a-aad4fd9fd5cd-scripts\") pod \"manila-share-share1-0\" (UID: \"7067e66b-2ec5-405e-871a-aad4fd9fd5cd\") " pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc 
kubenswrapper[4820]: I0201 15:08:02.740672 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7067e66b-2ec5-405e-871a-aad4fd9fd5cd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"7067e66b-2ec5-405e-871a-aad4fd9fd5cd\") " pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.740737 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/7067e66b-2ec5-405e-871a-aad4fd9fd5cd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"7067e66b-2ec5-405e-871a-aad4fd9fd5cd\") " pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.740759 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7067e66b-2ec5-405e-871a-aad4fd9fd5cd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"7067e66b-2ec5-405e-871a-aad4fd9fd5cd\") " pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.740811 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7067e66b-2ec5-405e-871a-aad4fd9fd5cd-ceph\") pod \"manila-share-share1-0\" (UID: \"7067e66b-2ec5-405e-871a-aad4fd9fd5cd\") " pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.740854 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7067e66b-2ec5-405e-871a-aad4fd9fd5cd-scripts\") pod \"manila-share-share1-0\" (UID: \"7067e66b-2ec5-405e-871a-aad4fd9fd5cd\") " pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.740942 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7067e66b-2ec5-405e-871a-aad4fd9fd5cd-config-data\") pod \"manila-share-share1-0\" (UID: \"7067e66b-2ec5-405e-871a-aad4fd9fd5cd\") " pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.741023 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tclwm\" (UniqueName: \"kubernetes.io/projected/7067e66b-2ec5-405e-871a-aad4fd9fd5cd-kube-api-access-tclwm\") pod \"manila-share-share1-0\" (UID: \"7067e66b-2ec5-405e-871a-aad4fd9fd5cd\") " pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.741096 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/7067e66b-2ec5-405e-871a-aad4fd9fd5cd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"7067e66b-2ec5-405e-871a-aad4fd9fd5cd\") " pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.741737 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7067e66b-2ec5-405e-871a-aad4fd9fd5cd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"7067e66b-2ec5-405e-871a-aad4fd9fd5cd\") " pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.741778 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/7067e66b-2ec5-405e-871a-aad4fd9fd5cd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"7067e66b-2ec5-405e-871a-aad4fd9fd5cd\") " pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.746576 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7067e66b-2ec5-405e-871a-aad4fd9fd5cd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"7067e66b-2ec5-405e-871a-aad4fd9fd5cd\") " pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.748630 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7067e66b-2ec5-405e-871a-aad4fd9fd5cd-scripts\") pod \"manila-share-share1-0\" (UID: \"7067e66b-2ec5-405e-871a-aad4fd9fd5cd\") " pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.749651 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7067e66b-2ec5-405e-871a-aad4fd9fd5cd-ceph\") pod \"manila-share-share1-0\" (UID: \"7067e66b-2ec5-405e-871a-aad4fd9fd5cd\") " pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.749745 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7067e66b-2ec5-405e-871a-aad4fd9fd5cd-config-data\") pod \"manila-share-share1-0\" (UID: \"7067e66b-2ec5-405e-871a-aad4fd9fd5cd\") " pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.751613 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7067e66b-2ec5-405e-871a-aad4fd9fd5cd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"7067e66b-2ec5-405e-871a-aad4fd9fd5cd\") " pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.766605 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tclwm\" (UniqueName: \"kubernetes.io/projected/7067e66b-2ec5-405e-871a-aad4fd9fd5cd-kube-api-access-tclwm\") pod \"manila-share-share1-0\" (UID: \"7067e66b-2ec5-405e-871a-aad4fd9fd5cd\") " pod="openstack/manila-share-share1-0" Feb 01 15:08:02 crc kubenswrapper[4820]: I0201 15:08:02.837303 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 01 15:08:03 crc kubenswrapper[4820]: I0201 15:08:03.212576 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef4f913d-b12b-4cf0-af89-7b289df9ceed" path="/var/lib/kubelet/pods/ef4f913d-b12b-4cf0-af89-7b289df9ceed/volumes" Feb 01 15:08:03 crc kubenswrapper[4820]: I0201 15:08:03.555529 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 01 15:08:03 crc kubenswrapper[4820]: W0201 15:08:03.568908 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7067e66b_2ec5_405e_871a_aad4fd9fd5cd.slice/crio-1e55f94e8fda0ed9fbfa61b1e14138aa2008f42ee97507c98fd4cffa0aba3eaf WatchSource:0}: Error finding container 1e55f94e8fda0ed9fbfa61b1e14138aa2008f42ee97507c98fd4cffa0aba3eaf: Status 404 returned error can't find the container with id 1e55f94e8fda0ed9fbfa61b1e14138aa2008f42ee97507c98fd4cffa0aba3eaf Feb 01 15:08:04 crc kubenswrapper[4820]: I0201 15:08:04.442400 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"7067e66b-2ec5-405e-871a-aad4fd9fd5cd","Type":"ContainerStarted","Data":"4ae04e18ac2d82453c7e59dd0a627999524b19d4ce2f04c6fc19b3a68fee729b"} Feb 01 15:08:04 crc kubenswrapper[4820]: I0201 15:08:04.443284 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"7067e66b-2ec5-405e-871a-aad4fd9fd5cd","Type":"ContainerStarted","Data":"1e55f94e8fda0ed9fbfa61b1e14138aa2008f42ee97507c98fd4cffa0aba3eaf"} Feb 01 15:08:04 crc kubenswrapper[4820]: I0201 15:08:04.447252 4820 generic.go:334] "Generic (PLEG): container finished" podID="f71b0669-a307-45e5-8950-a81d88db9cac" containerID="1d6bd23d31dbc2ce207cd0f992ee89c2c2b8e79421952cbe59a5ee01d70fab36" exitCode=0 Feb 01 15:08:04 crc kubenswrapper[4820]: I0201 15:08:04.447285 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"f71b0669-a307-45e5-8950-a81d88db9cac","Type":"ContainerDied","Data":"1d6bd23d31dbc2ce207cd0f992ee89c2c2b8e79421952cbe59a5ee01d70fab36"} Feb 01 15:08:04 crc kubenswrapper[4820]: I0201 15:08:04.489753 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 01 15:08:04 crc kubenswrapper[4820]: I0201 15:08:04.600898 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f71b0669-a307-45e5-8950-a81d88db9cac-config-data\") pod \"f71b0669-a307-45e5-8950-a81d88db9cac\" (UID: \"f71b0669-a307-45e5-8950-a81d88db9cac\") " Feb 01 15:08:04 crc kubenswrapper[4820]: I0201 15:08:04.600994 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f71b0669-a307-45e5-8950-a81d88db9cac-scripts\") pod \"f71b0669-a307-45e5-8950-a81d88db9cac\" (UID: \"f71b0669-a307-45e5-8950-a81d88db9cac\") " Feb 01 15:08:04 crc kubenswrapper[4820]: I0201 15:08:04.601033 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f71b0669-a307-45e5-8950-a81d88db9cac-config-data-custom\") pod \"f71b0669-a307-45e5-8950-a81d88db9cac\" (UID: \"f71b0669-a307-45e5-8950-a81d88db9cac\") " Feb 01 15:08:04 crc kubenswrapper[4820]: I0201 15:08:04.601082 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj8mn\" (UniqueName: \"kubernetes.io/projected/f71b0669-a307-45e5-8950-a81d88db9cac-kube-api-access-xj8mn\") pod \"f71b0669-a307-45e5-8950-a81d88db9cac\" (UID: \"f71b0669-a307-45e5-8950-a81d88db9cac\") " Feb 01 15:08:04 crc kubenswrapper[4820]: I0201 15:08:04.601274 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f71b0669-a307-45e5-8950-a81d88db9cac-etc-machine-id\") pod \"f71b0669-a307-45e5-8950-a81d88db9cac\" (UID: \"f71b0669-a307-45e5-8950-a81d88db9cac\") " Feb 01 15:08:04 crc kubenswrapper[4820]: I0201 15:08:04.601297 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f71b0669-a307-45e5-8950-a81d88db9cac-combined-ca-bundle\") pod \"f71b0669-a307-45e5-8950-a81d88db9cac\" (UID: \"f71b0669-a307-45e5-8950-a81d88db9cac\") " Feb 01 15:08:04 crc kubenswrapper[4820]: I0201 15:08:04.602622 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f71b0669-a307-45e5-8950-a81d88db9cac-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f71b0669-a307-45e5-8950-a81d88db9cac" (UID: "f71b0669-a307-45e5-8950-a81d88db9cac"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 15:08:04 crc kubenswrapper[4820]: I0201 15:08:04.610138 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f71b0669-a307-45e5-8950-a81d88db9cac-scripts" (OuterVolumeSpecName: "scripts") pod "f71b0669-a307-45e5-8950-a81d88db9cac" (UID: "f71b0669-a307-45e5-8950-a81d88db9cac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:08:04 crc kubenswrapper[4820]: I0201 15:08:04.611016 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f71b0669-a307-45e5-8950-a81d88db9cac-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f71b0669-a307-45e5-8950-a81d88db9cac" (UID: "f71b0669-a307-45e5-8950-a81d88db9cac"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:08:04 crc kubenswrapper[4820]: I0201 15:08:04.611093 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f71b0669-a307-45e5-8950-a81d88db9cac-kube-api-access-xj8mn" (OuterVolumeSpecName: "kube-api-access-xj8mn") pod "f71b0669-a307-45e5-8950-a81d88db9cac" (UID: "f71b0669-a307-45e5-8950-a81d88db9cac"). InnerVolumeSpecName "kube-api-access-xj8mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:08:04 crc kubenswrapper[4820]: I0201 15:08:04.661472 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f71b0669-a307-45e5-8950-a81d88db9cac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f71b0669-a307-45e5-8950-a81d88db9cac" (UID: "f71b0669-a307-45e5-8950-a81d88db9cac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:08:04 crc kubenswrapper[4820]: I0201 15:08:04.704267 4820 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f71b0669-a307-45e5-8950-a81d88db9cac-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 01 15:08:04 crc kubenswrapper[4820]: I0201 15:08:04.704305 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f71b0669-a307-45e5-8950-a81d88db9cac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 15:08:04 crc kubenswrapper[4820]: I0201 15:08:04.704320 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f71b0669-a307-45e5-8950-a81d88db9cac-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 15:08:04 crc kubenswrapper[4820]: I0201 15:08:04.704334 4820 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f71b0669-a307-45e5-8950-a81d88db9cac-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 01 15:08:04 crc kubenswrapper[4820]: I0201 15:08:04.704346 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj8mn\" (UniqueName: \"kubernetes.io/projected/f71b0669-a307-45e5-8950-a81d88db9cac-kube-api-access-xj8mn\") on node \"crc\" DevicePath \"\"" Feb 01 15:08:04 crc kubenswrapper[4820]: I0201 15:08:04.713254 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f71b0669-a307-45e5-8950-a81d88db9cac-config-data" (OuterVolumeSpecName: "config-data") pod "f71b0669-a307-45e5-8950-a81d88db9cac" (UID: "f71b0669-a307-45e5-8950-a81d88db9cac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:08:04 crc kubenswrapper[4820]: I0201 15:08:04.805819 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f71b0669-a307-45e5-8950-a81d88db9cac-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 15:08:05 crc kubenswrapper[4820]: E0201 15:08:05.441682 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf71b0669_a307_45e5_8950_a81d88db9cac.slice/crio-c86ff21ef1d261b1048cf9177ca7a6958ece16cc6da522926002e85d0522221a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf71b0669_a307_45e5_8950_a81d88db9cac.slice\": RecentStats: unable to find data in memory cache]" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.460053 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"f71b0669-a307-45e5-8950-a81d88db9cac","Type":"ContainerDied","Data":"c86ff21ef1d261b1048cf9177ca7a6958ece16cc6da522926002e85d0522221a"} Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.460121 4820 scope.go:117] "RemoveContainer" containerID="c351321aefcc3e3c1f60c56ccf42a8deee63cb200083f647d0e6701c38d250d7" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.460266 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.463756 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"7067e66b-2ec5-405e-871a-aad4fd9fd5cd","Type":"ContainerStarted","Data":"6e984ab4af0ab9f80094fec02ada5fc9218e49697d3b952d3dff76b314a6020c"} Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.484510 4820 scope.go:117] "RemoveContainer" containerID="1d6bd23d31dbc2ce207cd0f992ee89c2c2b8e79421952cbe59a5ee01d70fab36" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.507539 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.5075026620000003 podStartE2EDuration="3.507502662s" podCreationTimestamp="2026-02-01 15:08:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 15:08:05.49154347 +0000 UTC m=+2827.011909754" watchObservedRunningTime="2026-02-01 15:08:05.507502662 +0000 UTC m=+2827.027868986" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.520975 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.537384 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.546124 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Feb 01 15:08:05 crc kubenswrapper[4820]: E0201 15:08:05.546689 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f71b0669-a307-45e5-8950-a81d88db9cac" containerName="manila-scheduler" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.546711 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f71b0669-a307-45e5-8950-a81d88db9cac" containerName="manila-scheduler" Feb 01 15:08:05 crc kubenswrapper[4820]: E0201 15:08:05.546721 4820 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f71b0669-a307-45e5-8950-a81d88db9cac" containerName="probe" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.546728 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f71b0669-a307-45e5-8950-a81d88db9cac" containerName="probe" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.547030 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f71b0669-a307-45e5-8950-a81d88db9cac" containerName="manila-scheduler" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.547075 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f71b0669-a307-45e5-8950-a81d88db9cac" containerName="probe" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.550929 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.554737 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.556457 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.634251 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21ae71fc-043a-49f0-8b83-5c64edfb9e9c-config-data\") pod \"manila-scheduler-0\" (UID: \"21ae71fc-043a-49f0-8b83-5c64edfb9e9c\") " pod="openstack/manila-scheduler-0" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.634319 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21ae71fc-043a-49f0-8b83-5c64edfb9e9c-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"21ae71fc-043a-49f0-8b83-5c64edfb9e9c\") " pod="openstack/manila-scheduler-0" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.634349 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21ae71fc-043a-49f0-8b83-5c64edfb9e9c-scripts\") pod \"manila-scheduler-0\" (UID: \"21ae71fc-043a-49f0-8b83-5c64edfb9e9c\") " pod="openstack/manila-scheduler-0" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.634449 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/21ae71fc-043a-49f0-8b83-5c64edfb9e9c-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"21ae71fc-043a-49f0-8b83-5c64edfb9e9c\") " pod="openstack/manila-scheduler-0" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.634467 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21ae71fc-043a-49f0-8b83-5c64edfb9e9c-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"21ae71fc-043a-49f0-8b83-5c64edfb9e9c\") " pod="openstack/manila-scheduler-0" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.634500 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b7c7\" (UniqueName: \"kubernetes.io/projected/21ae71fc-043a-49f0-8b83-5c64edfb9e9c-kube-api-access-7b7c7\") pod \"manila-scheduler-0\" (UID: \"21ae71fc-043a-49f0-8b83-5c64edfb9e9c\") " pod="openstack/manila-scheduler-0" Feb 01 15:08:05 crc kubenswrapper[4820]: 
I0201 15:08:05.736955 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/21ae71fc-043a-49f0-8b83-5c64edfb9e9c-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"21ae71fc-043a-49f0-8b83-5c64edfb9e9c\") " pod="openstack/manila-scheduler-0" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.737029 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21ae71fc-043a-49f0-8b83-5c64edfb9e9c-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"21ae71fc-043a-49f0-8b83-5c64edfb9e9c\") " pod="openstack/manila-scheduler-0" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.737068 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b7c7\" (UniqueName: \"kubernetes.io/projected/21ae71fc-043a-49f0-8b83-5c64edfb9e9c-kube-api-access-7b7c7\") pod \"manila-scheduler-0\" (UID: \"21ae71fc-043a-49f0-8b83-5c64edfb9e9c\") " pod="openstack/manila-scheduler-0" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.737071 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/21ae71fc-043a-49f0-8b83-5c64edfb9e9c-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"21ae71fc-043a-49f0-8b83-5c64edfb9e9c\") " pod="openstack/manila-scheduler-0" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.737188 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21ae71fc-043a-49f0-8b83-5c64edfb9e9c-config-data\") pod \"manila-scheduler-0\" (UID: \"21ae71fc-043a-49f0-8b83-5c64edfb9e9c\") " pod="openstack/manila-scheduler-0" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.737217 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21ae71fc-043a-49f0-8b83-5c64edfb9e9c-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"21ae71fc-043a-49f0-8b83-5c64edfb9e9c\") " pod="openstack/manila-scheduler-0" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.737237 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21ae71fc-043a-49f0-8b83-5c64edfb9e9c-scripts\") pod \"manila-scheduler-0\" (UID: \"21ae71fc-043a-49f0-8b83-5c64edfb9e9c\") " pod="openstack/manila-scheduler-0" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.744440 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21ae71fc-043a-49f0-8b83-5c64edfb9e9c-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"21ae71fc-043a-49f0-8b83-5c64edfb9e9c\") " pod="openstack/manila-scheduler-0" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.744993 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21ae71fc-043a-49f0-8b83-5c64edfb9e9c-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"21ae71fc-043a-49f0-8b83-5c64edfb9e9c\") " pod="openstack/manila-scheduler-0" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.746569 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21ae71fc-043a-49f0-8b83-5c64edfb9e9c-config-data\") pod \"manila-scheduler-0\" (UID: 
\"21ae71fc-043a-49f0-8b83-5c64edfb9e9c\") " pod="openstack/manila-scheduler-0" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.749716 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21ae71fc-043a-49f0-8b83-5c64edfb9e9c-scripts\") pod \"manila-scheduler-0\" (UID: \"21ae71fc-043a-49f0-8b83-5c64edfb9e9c\") " pod="openstack/manila-scheduler-0" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.776979 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b7c7\" (UniqueName: \"kubernetes.io/projected/21ae71fc-043a-49f0-8b83-5c64edfb9e9c-kube-api-access-7b7c7\") pod \"manila-scheduler-0\" (UID: \"21ae71fc-043a-49f0-8b83-5c64edfb9e9c\") " pod="openstack/manila-scheduler-0" Feb 01 15:08:05 crc kubenswrapper[4820]: I0201 15:08:05.884325 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Feb 01 15:08:06 crc kubenswrapper[4820]: I0201 15:08:06.236440 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Feb 01 15:08:06 crc kubenswrapper[4820]: I0201 15:08:06.438309 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 01 15:08:06 crc kubenswrapper[4820]: I0201 15:08:06.478026 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"21ae71fc-043a-49f0-8b83-5c64edfb9e9c","Type":"ContainerStarted","Data":"08d49f94e8a6e5fa1b161744abbdf17253361ce212152f088cdc6429bb789df5"} Feb 01 15:08:07 crc kubenswrapper[4820]: I0201 15:08:07.235440 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f71b0669-a307-45e5-8950-a81d88db9cac" path="/var/lib/kubelet/pods/f71b0669-a307-45e5-8950-a81d88db9cac/volumes" Feb 01 15:08:07 crc kubenswrapper[4820]: I0201 15:08:07.528343 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"21ae71fc-043a-49f0-8b83-5c64edfb9e9c","Type":"ContainerStarted","Data":"b3946eeafbf44187b44ac0ed1a921690ef3cb2cccde02800ddaa5a6c800290b1"} Feb 01 15:08:07 crc kubenswrapper[4820]: I0201 15:08:07.528422 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"21ae71fc-043a-49f0-8b83-5c64edfb9e9c","Type":"ContainerStarted","Data":"0beb3d1aa9f4d38eb89279f01109cfe62b6e91d3df42364bb88cfdcc3b5db9d0"} Feb 01 15:08:07 crc kubenswrapper[4820]: I0201 15:08:07.564548 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=2.5645238580000003 podStartE2EDuration="2.564523858s" podCreationTimestamp="2026-02-01 15:08:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 15:08:07.557252593 +0000 UTC m=+2829.077618877" watchObservedRunningTime="2026-02-01 15:08:07.564523858 +0000 UTC m=+2829.084890152" Feb 01 15:08:10 crc kubenswrapper[4820]: I0201 15:08:10.263391 4820 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-79c9fd7b88-c7gn4" podUID="9551e678-9809-43e8-8ea2-33c7b873f076" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.241:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.241:8443: connect: connection refused" Feb 01 15:08:10 crc kubenswrapper[4820]: I0201 15:08:10.264742 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:08:11 crc kubenswrapper[4820]: I0201 15:08:11.197636 4820 scope.go:117] "RemoveContainer" containerID="de55a0e41edf9855c9c4ab441b97d105ed9be3b272d8f2a729bd53f87631c592" Feb 01 15:08:11 crc kubenswrapper[4820]: I0201 15:08:11.235033 4820 scope.go:117] "RemoveContainer" containerID="90558a52ac83d67af670de478e94d8575dbd0d29f9c3d41de8632a79fcf19341" Feb 01 15:08:11 crc kubenswrapper[4820]: I0201 15:08:11.284582 4820 scope.go:117] "RemoveContainer" containerID="4b379e07fe57f063361a1ce621aa4a84c0e98687f48d8b08a4f1a9f37a91348f" Feb 01 15:08:11 crc kubenswrapper[4820]: I0201 15:08:11.317630 4820 scope.go:117] "RemoveContainer" containerID="baa6707e177e6515b30b164bfaaf7a0ec0d7d8f51bd4b5f6f8a26b93062a8f75" Feb 01 15:08:12 crc kubenswrapper[4820]: I0201 15:08:12.838444 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.629974 4820 generic.go:334] "Generic (PLEG): container finished" podID="9551e678-9809-43e8-8ea2-33c7b873f076" containerID="b28977f89c28d14b467c829ed188a961a6e13bc5d6afb6189bcb2b6659a2f028" exitCode=137 Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.630045 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79c9fd7b88-c7gn4" event={"ID":"9551e678-9809-43e8-8ea2-33c7b873f076","Type":"ContainerDied","Data":"b28977f89c28d14b467c829ed188a961a6e13bc5d6afb6189bcb2b6659a2f028"} Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.630785 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79c9fd7b88-c7gn4" event={"ID":"9551e678-9809-43e8-8ea2-33c7b873f076","Type":"ContainerDied","Data":"b40c40d443c2860f8d5336bc9137602f17d0538268b1e475f66c829e059bdf4d"} Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.630812 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b40c40d443c2860f8d5336bc9137602f17d0538268b1e475f66c829e059bdf4d" Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.669183 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.722517 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9551e678-9809-43e8-8ea2-33c7b873f076-scripts\") pod \"9551e678-9809-43e8-8ea2-33c7b873f076\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.722791 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9551e678-9809-43e8-8ea2-33c7b873f076-horizon-secret-key\") pod \"9551e678-9809-43e8-8ea2-33c7b873f076\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.722859 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9551e678-9809-43e8-8ea2-33c7b873f076-config-data\") pod \"9551e678-9809-43e8-8ea2-33c7b873f076\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.722960 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9551e678-9809-43e8-8ea2-33c7b873f076-horizon-tls-certs\") pod \"9551e678-9809-43e8-8ea2-33c7b873f076\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.723019 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9551e678-9809-43e8-8ea2-33c7b873f076-combined-ca-bundle\") pod \"9551e678-9809-43e8-8ea2-33c7b873f076\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.723074 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9551e678-9809-43e8-8ea2-33c7b873f076-logs\") pod \"9551e678-9809-43e8-8ea2-33c7b873f076\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.723122 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdjlb\" (UniqueName: \"kubernetes.io/projected/9551e678-9809-43e8-8ea2-33c7b873f076-kube-api-access-fdjlb\") pod \"9551e678-9809-43e8-8ea2-33c7b873f076\" (UID: \"9551e678-9809-43e8-8ea2-33c7b873f076\") " Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.734426 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9551e678-9809-43e8-8ea2-33c7b873f076-kube-api-access-fdjlb" (OuterVolumeSpecName: "kube-api-access-fdjlb") pod "9551e678-9809-43e8-8ea2-33c7b873f076" (UID: "9551e678-9809-43e8-8ea2-33c7b873f076"). InnerVolumeSpecName "kube-api-access-fdjlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.737348 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9551e678-9809-43e8-8ea2-33c7b873f076-logs" (OuterVolumeSpecName: "logs") pod "9551e678-9809-43e8-8ea2-33c7b873f076" (UID: "9551e678-9809-43e8-8ea2-33c7b873f076"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.748543 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9551e678-9809-43e8-8ea2-33c7b873f076-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9551e678-9809-43e8-8ea2-33c7b873f076" (UID: "9551e678-9809-43e8-8ea2-33c7b873f076"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.775113 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9551e678-9809-43e8-8ea2-33c7b873f076-scripts" (OuterVolumeSpecName: "scripts") pod "9551e678-9809-43e8-8ea2-33c7b873f076" (UID: "9551e678-9809-43e8-8ea2-33c7b873f076"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.775146 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9551e678-9809-43e8-8ea2-33c7b873f076-config-data" (OuterVolumeSpecName: "config-data") pod "9551e678-9809-43e8-8ea2-33c7b873f076" (UID: "9551e678-9809-43e8-8ea2-33c7b873f076"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.784121 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9551e678-9809-43e8-8ea2-33c7b873f076-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9551e678-9809-43e8-8ea2-33c7b873f076" (UID: "9551e678-9809-43e8-8ea2-33c7b873f076"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.827825 4820 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9551e678-9809-43e8-8ea2-33c7b873f076-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.828276 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9551e678-9809-43e8-8ea2-33c7b873f076-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.828287 4820 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9551e678-9809-43e8-8ea2-33c7b873f076-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.828297 4820 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9551e678-9809-43e8-8ea2-33c7b873f076-logs\") on node \"crc\" DevicePath \"\"" Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.828306 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdjlb\" (UniqueName: \"kubernetes.io/projected/9551e678-9809-43e8-8ea2-33c7b873f076-kube-api-access-fdjlb\") on node \"crc\" DevicePath \"\"" Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.828318 4820 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9551e678-9809-43e8-8ea2-33c7b873f076-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.834267 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/9551e678-9809-43e8-8ea2-33c7b873f076-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "9551e678-9809-43e8-8ea2-33c7b873f076" (UID: "9551e678-9809-43e8-8ea2-33c7b873f076"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.885520 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Feb 01 15:08:15 crc kubenswrapper[4820]: I0201 15:08:15.930409 4820 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9551e678-9809-43e8-8ea2-33c7b873f076-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 01 15:08:16 crc kubenswrapper[4820]: I0201 15:08:16.640897 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79c9fd7b88-c7gn4" Feb 01 15:08:16 crc kubenswrapper[4820]: I0201 15:08:16.685732 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-79c9fd7b88-c7gn4"] Feb 01 15:08:16 crc kubenswrapper[4820]: I0201 15:08:16.693989 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-79c9fd7b88-c7gn4"] Feb 01 15:08:17 crc kubenswrapper[4820]: I0201 15:08:17.219820 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9551e678-9809-43e8-8ea2-33c7b873f076" path="/var/lib/kubelet/pods/9551e678-9809-43e8-8ea2-33c7b873f076/volumes" Feb 01 15:08:21 crc kubenswrapper[4820]: I0201 15:08:21.838644 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n7vrc"] Feb 01 15:08:21 crc kubenswrapper[4820]: E0201 15:08:21.839811 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9551e678-9809-43e8-8ea2-33c7b873f076" containerName="horizon-log" Feb 01 15:08:21 crc kubenswrapper[4820]: I0201 15:08:21.839834 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9551e678-9809-43e8-8ea2-33c7b873f076" containerName="horizon-log" Feb 01 15:08:21 crc kubenswrapper[4820]: E0201 15:08:21.839900 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9551e678-9809-43e8-8ea2-33c7b873f076" containerName="horizon" Feb 01 15:08:21 crc kubenswrapper[4820]: I0201 15:08:21.839914 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9551e678-9809-43e8-8ea2-33c7b873f076" containerName="horizon" Feb 01 15:08:21 crc kubenswrapper[4820]: I0201 15:08:21.840820 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9551e678-9809-43e8-8ea2-33c7b873f076" containerName="horizon" Feb 01 15:08:21 crc kubenswrapper[4820]: I0201 15:08:21.840864 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9551e678-9809-43e8-8ea2-33c7b873f076" containerName="horizon-log" Feb 01 15:08:21 crc kubenswrapper[4820]: I0201 15:08:21.843796 4820 util.go:30] "No sandbox for pod can be found. 
Feb 01 15:08:21 crc kubenswrapper[4820]: I0201 15:08:21.843796 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n7vrc"
Feb 01 15:08:21 crc kubenswrapper[4820]: I0201 15:08:21.851343 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n7vrc"]
Feb 01 15:08:21 crc kubenswrapper[4820]: I0201 15:08:21.881842 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csglf\" (UniqueName: \"kubernetes.io/projected/94199722-1992-4a86-ad82-335bab2265ea-kube-api-access-csglf\") pod \"redhat-marketplace-n7vrc\" (UID: \"94199722-1992-4a86-ad82-335bab2265ea\") " pod="openshift-marketplace/redhat-marketplace-n7vrc"
Feb 01 15:08:21 crc kubenswrapper[4820]: I0201 15:08:21.882114 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94199722-1992-4a86-ad82-335bab2265ea-utilities\") pod \"redhat-marketplace-n7vrc\" (UID: \"94199722-1992-4a86-ad82-335bab2265ea\") " pod="openshift-marketplace/redhat-marketplace-n7vrc"
Feb 01 15:08:21 crc kubenswrapper[4820]: I0201 15:08:21.882283 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94199722-1992-4a86-ad82-335bab2265ea-catalog-content\") pod \"redhat-marketplace-n7vrc\" (UID: \"94199722-1992-4a86-ad82-335bab2265ea\") " pod="openshift-marketplace/redhat-marketplace-n7vrc"
Feb 01 15:08:21 crc kubenswrapper[4820]: I0201 15:08:21.985560 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csglf\" (UniqueName: \"kubernetes.io/projected/94199722-1992-4a86-ad82-335bab2265ea-kube-api-access-csglf\") pod \"redhat-marketplace-n7vrc\" (UID: \"94199722-1992-4a86-ad82-335bab2265ea\") " pod="openshift-marketplace/redhat-marketplace-n7vrc"
Feb 01 15:08:21 crc kubenswrapper[4820]: I0201 15:08:21.985699 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94199722-1992-4a86-ad82-335bab2265ea-utilities\") pod \"redhat-marketplace-n7vrc\" (UID: \"94199722-1992-4a86-ad82-335bab2265ea\") " pod="openshift-marketplace/redhat-marketplace-n7vrc"
Feb 01 15:08:21 crc kubenswrapper[4820]: I0201 15:08:21.985784 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94199722-1992-4a86-ad82-335bab2265ea-catalog-content\") pod \"redhat-marketplace-n7vrc\" (UID: \"94199722-1992-4a86-ad82-335bab2265ea\") " pod="openshift-marketplace/redhat-marketplace-n7vrc"
Feb 01 15:08:21 crc kubenswrapper[4820]: I0201 15:08:21.986596 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94199722-1992-4a86-ad82-335bab2265ea-utilities\") pod \"redhat-marketplace-n7vrc\" (UID: \"94199722-1992-4a86-ad82-335bab2265ea\") " pod="openshift-marketplace/redhat-marketplace-n7vrc"
Feb 01 15:08:21 crc kubenswrapper[4820]: I0201 15:08:21.986635 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94199722-1992-4a86-ad82-335bab2265ea-catalog-content\") pod \"redhat-marketplace-n7vrc\" (UID: \"94199722-1992-4a86-ad82-335bab2265ea\") " pod="openshift-marketplace/redhat-marketplace-n7vrc"
Feb 01 15:08:22 crc kubenswrapper[4820]: I0201 15:08:22.013173 4820 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-csglf\" (UniqueName: \"kubernetes.io/projected/94199722-1992-4a86-ad82-335bab2265ea-kube-api-access-csglf\") pod \"redhat-marketplace-n7vrc\" (UID: \"94199722-1992-4a86-ad82-335bab2265ea\") " pod="openshift-marketplace/redhat-marketplace-n7vrc" Feb 01 15:08:22 crc kubenswrapper[4820]: I0201 15:08:22.183781 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n7vrc" Feb 01 15:08:22 crc kubenswrapper[4820]: I0201 15:08:22.491326 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n7vrc"] Feb 01 15:08:22 crc kubenswrapper[4820]: I0201 15:08:22.712443 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n7vrc" event={"ID":"94199722-1992-4a86-ad82-335bab2265ea","Type":"ContainerStarted","Data":"b30341fa953fb685c92f7d7650d16983a9ceac7c365feb2824e75dc3c9bbb427"} Feb 01 15:08:22 crc kubenswrapper[4820]: I0201 15:08:22.713166 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n7vrc" event={"ID":"94199722-1992-4a86-ad82-335bab2265ea","Type":"ContainerStarted","Data":"71cce30ef32c6fb595e09167d1b8262e7ff3e18e5d1949793bcc5e51f84eb301"} Feb 01 15:08:23 crc kubenswrapper[4820]: I0201 15:08:23.727279 4820 generic.go:334] "Generic (PLEG): container finished" podID="94199722-1992-4a86-ad82-335bab2265ea" containerID="b30341fa953fb685c92f7d7650d16983a9ceac7c365feb2824e75dc3c9bbb427" exitCode=0 Feb 01 15:08:23 crc kubenswrapper[4820]: I0201 15:08:23.727679 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n7vrc" event={"ID":"94199722-1992-4a86-ad82-335bab2265ea","Type":"ContainerDied","Data":"b30341fa953fb685c92f7d7650d16983a9ceac7c365feb2824e75dc3c9bbb427"} Feb 01 15:08:24 crc kubenswrapper[4820]: I0201 15:08:24.268053 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Feb 01 15:08:24 crc kubenswrapper[4820]: I0201 15:08:24.697551 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 01 15:08:24 crc kubenswrapper[4820]: I0201 15:08:24.759616 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n7vrc" event={"ID":"94199722-1992-4a86-ad82-335bab2265ea","Type":"ContainerStarted","Data":"e637b208197643f563de148b3703c0717f1b5dbb94cdfb47d59fb5e0f1bd5dbf"} Feb 01 15:08:25 crc kubenswrapper[4820]: I0201 15:08:25.770783 4820 generic.go:334] "Generic (PLEG): container finished" podID="94199722-1992-4a86-ad82-335bab2265ea" containerID="e637b208197643f563de148b3703c0717f1b5dbb94cdfb47d59fb5e0f1bd5dbf" exitCode=0 Feb 01 15:08:25 crc kubenswrapper[4820]: I0201 15:08:25.770919 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n7vrc" event={"ID":"94199722-1992-4a86-ad82-335bab2265ea","Type":"ContainerDied","Data":"e637b208197643f563de148b3703c0717f1b5dbb94cdfb47d59fb5e0f1bd5dbf"} Feb 01 15:08:26 crc kubenswrapper[4820]: I0201 15:08:26.792422 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n7vrc" event={"ID":"94199722-1992-4a86-ad82-335bab2265ea","Type":"ContainerStarted","Data":"2dfa35c98411f607d5caddd6075ebbf762341f99192eafa05f4cbdcfa0dcb8e2"} Feb 01 15:08:26 crc kubenswrapper[4820]: I0201 15:08:26.829745 4820 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/redhat-marketplace-n7vrc" podStartSLOduration=3.28457768 podStartE2EDuration="5.829714842s" podCreationTimestamp="2026-02-01 15:08:21 +0000 UTC" firstStartedPulling="2026-02-01 15:08:23.729828365 +0000 UTC m=+2845.250194649" lastFinishedPulling="2026-02-01 15:08:26.274965487 +0000 UTC m=+2847.795331811" observedRunningTime="2026-02-01 15:08:26.818476552 +0000 UTC m=+2848.338842856" watchObservedRunningTime="2026-02-01 15:08:26.829714842 +0000 UTC m=+2848.350081136" Feb 01 15:08:27 crc kubenswrapper[4820]: I0201 15:08:27.558090 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Feb 01 15:08:32 crc kubenswrapper[4820]: I0201 15:08:32.184126 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n7vrc" Feb 01 15:08:32 crc kubenswrapper[4820]: I0201 15:08:32.185469 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n7vrc" Feb 01 15:08:32 crc kubenswrapper[4820]: I0201 15:08:32.265033 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n7vrc" Feb 01 15:08:32 crc kubenswrapper[4820]: I0201 15:08:32.940343 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n7vrc" Feb 01 15:08:33 crc kubenswrapper[4820]: I0201 15:08:33.022544 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n7vrc"] Feb 01 15:08:34 crc kubenswrapper[4820]: I0201 15:08:34.886633 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n7vrc" podUID="94199722-1992-4a86-ad82-335bab2265ea" containerName="registry-server" containerID="cri-o://2dfa35c98411f607d5caddd6075ebbf762341f99192eafa05f4cbdcfa0dcb8e2" gracePeriod=2 Feb 01 15:08:35 crc kubenswrapper[4820]: I0201 15:08:35.491927 4820 util.go:48] "No ready sandbox for pod can be found. 
Feb 01 15:08:27 crc kubenswrapper[4820]: I0201 15:08:27.558090 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0"
Feb 01 15:08:32 crc kubenswrapper[4820]: I0201 15:08:32.184126 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n7vrc"
Feb 01 15:08:32 crc kubenswrapper[4820]: I0201 15:08:32.185469 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n7vrc"
Feb 01 15:08:32 crc kubenswrapper[4820]: I0201 15:08:32.265033 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n7vrc"
Feb 01 15:08:32 crc kubenswrapper[4820]: I0201 15:08:32.940343 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n7vrc"
Feb 01 15:08:33 crc kubenswrapper[4820]: I0201 15:08:33.022544 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n7vrc"]
Feb 01 15:08:34 crc kubenswrapper[4820]: I0201 15:08:34.886633 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n7vrc" podUID="94199722-1992-4a86-ad82-335bab2265ea" containerName="registry-server" containerID="cri-o://2dfa35c98411f607d5caddd6075ebbf762341f99192eafa05f4cbdcfa0dcb8e2" gracePeriod=2
Feb 01 15:08:35 crc kubenswrapper[4820]: I0201 15:08:35.491927 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n7vrc"
Feb 01 15:08:35 crc kubenswrapper[4820]: I0201 15:08:35.562216 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94199722-1992-4a86-ad82-335bab2265ea-catalog-content\") pod \"94199722-1992-4a86-ad82-335bab2265ea\" (UID: \"94199722-1992-4a86-ad82-335bab2265ea\") "
Feb 01 15:08:35 crc kubenswrapper[4820]: I0201 15:08:35.562957 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csglf\" (UniqueName: \"kubernetes.io/projected/94199722-1992-4a86-ad82-335bab2265ea-kube-api-access-csglf\") pod \"94199722-1992-4a86-ad82-335bab2265ea\" (UID: \"94199722-1992-4a86-ad82-335bab2265ea\") "
Feb 01 15:08:35 crc kubenswrapper[4820]: I0201 15:08:35.563225 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94199722-1992-4a86-ad82-335bab2265ea-utilities\") pod \"94199722-1992-4a86-ad82-335bab2265ea\" (UID: \"94199722-1992-4a86-ad82-335bab2265ea\") "
Feb 01 15:08:35 crc kubenswrapper[4820]: I0201 15:08:35.564608 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94199722-1992-4a86-ad82-335bab2265ea-utilities" (OuterVolumeSpecName: "utilities") pod "94199722-1992-4a86-ad82-335bab2265ea" (UID: "94199722-1992-4a86-ad82-335bab2265ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 15:08:35 crc kubenswrapper[4820]: I0201 15:08:35.575058 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94199722-1992-4a86-ad82-335bab2265ea-kube-api-access-csglf" (OuterVolumeSpecName: "kube-api-access-csglf") pod "94199722-1992-4a86-ad82-335bab2265ea" (UID: "94199722-1992-4a86-ad82-335bab2265ea"). InnerVolumeSpecName "kube-api-access-csglf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 15:08:35 crc kubenswrapper[4820]: I0201 15:08:35.603477 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94199722-1992-4a86-ad82-335bab2265ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94199722-1992-4a86-ad82-335bab2265ea" (UID: "94199722-1992-4a86-ad82-335bab2265ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:08:35 crc kubenswrapper[4820]: I0201 15:08:35.666277 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94199722-1992-4a86-ad82-335bab2265ea-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 15:08:35 crc kubenswrapper[4820]: I0201 15:08:35.666314 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csglf\" (UniqueName: \"kubernetes.io/projected/94199722-1992-4a86-ad82-335bab2265ea-kube-api-access-csglf\") on node \"crc\" DevicePath \"\"" Feb 01 15:08:35 crc kubenswrapper[4820]: I0201 15:08:35.666327 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94199722-1992-4a86-ad82-335bab2265ea-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 15:08:35 crc kubenswrapper[4820]: I0201 15:08:35.900488 4820 generic.go:334] "Generic (PLEG): container finished" podID="94199722-1992-4a86-ad82-335bab2265ea" containerID="2dfa35c98411f607d5caddd6075ebbf762341f99192eafa05f4cbdcfa0dcb8e2" exitCode=0 Feb 01 15:08:35 crc kubenswrapper[4820]: I0201 15:08:35.900595 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n7vrc" event={"ID":"94199722-1992-4a86-ad82-335bab2265ea","Type":"ContainerDied","Data":"2dfa35c98411f607d5caddd6075ebbf762341f99192eafa05f4cbdcfa0dcb8e2"} Feb 01 15:08:35 crc kubenswrapper[4820]: I0201 15:08:35.900614 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n7vrc" Feb 01 15:08:35 crc kubenswrapper[4820]: I0201 15:08:35.900662 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n7vrc" event={"ID":"94199722-1992-4a86-ad82-335bab2265ea","Type":"ContainerDied","Data":"71cce30ef32c6fb595e09167d1b8262e7ff3e18e5d1949793bcc5e51f84eb301"} Feb 01 15:08:35 crc kubenswrapper[4820]: I0201 15:08:35.900704 4820 scope.go:117] "RemoveContainer" containerID="2dfa35c98411f607d5caddd6075ebbf762341f99192eafa05f4cbdcfa0dcb8e2" Feb 01 15:08:35 crc kubenswrapper[4820]: I0201 15:08:35.939480 4820 scope.go:117] "RemoveContainer" containerID="e637b208197643f563de148b3703c0717f1b5dbb94cdfb47d59fb5e0f1bd5dbf" Feb 01 15:08:35 crc kubenswrapper[4820]: I0201 15:08:35.978454 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n7vrc"] Feb 01 15:08:35 crc kubenswrapper[4820]: I0201 15:08:35.995852 4820 scope.go:117] "RemoveContainer" containerID="b30341fa953fb685c92f7d7650d16983a9ceac7c365feb2824e75dc3c9bbb427" Feb 01 15:08:36 crc kubenswrapper[4820]: I0201 15:08:36.013210 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n7vrc"] Feb 01 15:08:36 crc kubenswrapper[4820]: I0201 15:08:36.073258 4820 scope.go:117] "RemoveContainer" containerID="2dfa35c98411f607d5caddd6075ebbf762341f99192eafa05f4cbdcfa0dcb8e2" Feb 01 15:08:36 crc kubenswrapper[4820]: E0201 15:08:36.075144 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dfa35c98411f607d5caddd6075ebbf762341f99192eafa05f4cbdcfa0dcb8e2\": container with ID starting with 2dfa35c98411f607d5caddd6075ebbf762341f99192eafa05f4cbdcfa0dcb8e2 not found: ID does not exist" containerID="2dfa35c98411f607d5caddd6075ebbf762341f99192eafa05f4cbdcfa0dcb8e2" Feb 01 15:08:36 crc kubenswrapper[4820]: I0201 15:08:36.075193 4820 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dfa35c98411f607d5caddd6075ebbf762341f99192eafa05f4cbdcfa0dcb8e2"} err="failed to get container status \"2dfa35c98411f607d5caddd6075ebbf762341f99192eafa05f4cbdcfa0dcb8e2\": rpc error: code = NotFound desc = could not find container \"2dfa35c98411f607d5caddd6075ebbf762341f99192eafa05f4cbdcfa0dcb8e2\": container with ID starting with 2dfa35c98411f607d5caddd6075ebbf762341f99192eafa05f4cbdcfa0dcb8e2 not found: ID does not exist" Feb 01 15:08:36 crc kubenswrapper[4820]: I0201 15:08:36.075227 4820 scope.go:117] "RemoveContainer" containerID="e637b208197643f563de148b3703c0717f1b5dbb94cdfb47d59fb5e0f1bd5dbf" Feb 01 15:08:36 crc kubenswrapper[4820]: E0201 15:08:36.075631 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e637b208197643f563de148b3703c0717f1b5dbb94cdfb47d59fb5e0f1bd5dbf\": container with ID starting with e637b208197643f563de148b3703c0717f1b5dbb94cdfb47d59fb5e0f1bd5dbf not found: ID does not exist" containerID="e637b208197643f563de148b3703c0717f1b5dbb94cdfb47d59fb5e0f1bd5dbf" Feb 01 15:08:36 crc kubenswrapper[4820]: I0201 15:08:36.076064 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e637b208197643f563de148b3703c0717f1b5dbb94cdfb47d59fb5e0f1bd5dbf"} err="failed to get container status \"e637b208197643f563de148b3703c0717f1b5dbb94cdfb47d59fb5e0f1bd5dbf\": rpc error: code = NotFound desc = could not find container \"e637b208197643f563de148b3703c0717f1b5dbb94cdfb47d59fb5e0f1bd5dbf\": container with ID starting with e637b208197643f563de148b3703c0717f1b5dbb94cdfb47d59fb5e0f1bd5dbf not found: ID does not exist" Feb 01 15:08:36 crc kubenswrapper[4820]: I0201 15:08:36.076197 4820 scope.go:117] "RemoveContainer" containerID="b30341fa953fb685c92f7d7650d16983a9ceac7c365feb2824e75dc3c9bbb427" Feb 01 15:08:36 crc kubenswrapper[4820]: E0201 15:08:36.077819 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b30341fa953fb685c92f7d7650d16983a9ceac7c365feb2824e75dc3c9bbb427\": container with ID starting with b30341fa953fb685c92f7d7650d16983a9ceac7c365feb2824e75dc3c9bbb427 not found: ID does not exist" containerID="b30341fa953fb685c92f7d7650d16983a9ceac7c365feb2824e75dc3c9bbb427" Feb 01 15:08:36 crc kubenswrapper[4820]: I0201 15:08:36.077862 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b30341fa953fb685c92f7d7650d16983a9ceac7c365feb2824e75dc3c9bbb427"} err="failed to get container status \"b30341fa953fb685c92f7d7650d16983a9ceac7c365feb2824e75dc3c9bbb427\": rpc error: code = NotFound desc = could not find container \"b30341fa953fb685c92f7d7650d16983a9ceac7c365feb2824e75dc3c9bbb427\": container with ID starting with b30341fa953fb685c92f7d7650d16983a9ceac7c365feb2824e75dc3c9bbb427 not found: ID does not exist" Feb 01 15:08:37 crc kubenswrapper[4820]: I0201 15:08:37.212294 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94199722-1992-4a86-ad82-335bab2265ea" path="/var/lib/kubelet/pods/94199722-1992-4a86-ad82-335bab2265ea/volumes" Feb 01 15:08:47 crc kubenswrapper[4820]: E0201 15:08:47.778932 4820 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.73:53710->38.102.83.73:46051: write tcp 38.102.83.73:53710->38.102.83.73:46051: write: broken pipe Feb 01 15:08:49 crc kubenswrapper[4820]: I0201 
15:08:49.242929 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 01 15:08:49 crc kubenswrapper[4820]: I0201 15:08:49.243502 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 01 15:09:19 crc kubenswrapper[4820]: I0201 15:09:19.243384 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 01 15:09:19 crc kubenswrapper[4820]: I0201 15:09:19.244081 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
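
The two records per tick are the same event at different layers: patch_prober logs the raw HTTP result, and prober.go books the failure against the pod. The check itself is just an HTTP GET that machine-config-daemon is not answering; it keeps failing on a 30-second cadence (15:08:49, 15:09:19, 15:09:49) until the kubelet kills the container, after which the restarts land in CrashLoopBackOff with the 5m0s back-off cap seen later in this log. The failing probe, reproduced with the stdlib (run on the node itself; the port comes from the log):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// Matches the log output: dial tcp 127.0.0.1:8798: connect: connection refused
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status) // kubelet treats 2xx/3xx as success
}
```
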
Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.253846 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Feb 01 15:09:27 crc kubenswrapper[4820]: E0201 15:09:27.254757 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94199722-1992-4a86-ad82-335bab2265ea" containerName="extract-content"
Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.254775 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="94199722-1992-4a86-ad82-335bab2265ea" containerName="extract-content"
Feb 01 15:09:27 crc kubenswrapper[4820]: E0201 15:09:27.254808 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94199722-1992-4a86-ad82-335bab2265ea" containerName="registry-server"
Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.254817 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="94199722-1992-4a86-ad82-335bab2265ea" containerName="registry-server"
Feb 01 15:09:27 crc kubenswrapper[4820]: E0201 15:09:27.254842 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94199722-1992-4a86-ad82-335bab2265ea" containerName="extract-utilities"
Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.254850 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="94199722-1992-4a86-ad82-335bab2265ea" containerName="extract-utilities"
Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.255092 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="94199722-1992-4a86-ad82-335bab2265ea" containerName="registry-server"
Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.255809 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.260487 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.260692 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.260722 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zqzp5"
Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.260963 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.277036 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.339415 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9b040975-c603-4b7a-875c-c372ddb0e24e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest"
Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.339479 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9b040975-c603-4b7a-875c-c372ddb0e24e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest"
Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.339512 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b040975-c603-4b7a-875c-c372ddb0e24e-config-data\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest"
Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.441416 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9b040975-c603-4b7a-875c-c372ddb0e24e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest"
Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.441814 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvhlt\" (UniqueName: \"kubernetes.io/projected/9b040975-c603-4b7a-875c-c372ddb0e24e-kube-api-access-wvhlt\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest"
Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.442020 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9b040975-c603-4b7a-875c-c372ddb0e24e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest"
Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.442367 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" 
(UniqueName: \"kubernetes.io/empty-dir/9b040975-c603-4b7a-875c-c372ddb0e24e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest" Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.442447 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9b040975-c603-4b7a-875c-c372ddb0e24e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest" Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.442515 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9b040975-c603-4b7a-875c-c372ddb0e24e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest" Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.444629 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9b040975-c603-4b7a-875c-c372ddb0e24e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest" Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.444833 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9b040975-c603-4b7a-875c-c372ddb0e24e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest" Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.445029 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b040975-c603-4b7a-875c-c372ddb0e24e-config-data\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest" Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.447566 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b040975-c603-4b7a-875c-c372ddb0e24e-config-data\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest" Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.448230 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest" Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.448930 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9b040975-c603-4b7a-875c-c372ddb0e24e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest" Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.550498 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " 
pod="openstack/tempest-tests-tempest" Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.550630 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9b040975-c603-4b7a-875c-c372ddb0e24e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest" Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.550697 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvhlt\" (UniqueName: \"kubernetes.io/projected/9b040975-c603-4b7a-875c-c372ddb0e24e-kube-api-access-wvhlt\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest" Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.550733 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9b040975-c603-4b7a-875c-c372ddb0e24e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest" Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.550752 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9b040975-c603-4b7a-875c-c372ddb0e24e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest" Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.550771 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9b040975-c603-4b7a-875c-c372ddb0e24e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest" Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.552254 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9b040975-c603-4b7a-875c-c372ddb0e24e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest" Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.552300 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest" Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.552545 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9b040975-c603-4b7a-875c-c372ddb0e24e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest" Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.556287 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9b040975-c603-4b7a-875c-c372ddb0e24e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest" Feb 01 15:09:27 crc 
kubenswrapper[4820]: I0201 15:09:27.556703 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9b040975-c603-4b7a-875c-c372ddb0e24e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest" Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.576978 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvhlt\" (UniqueName: \"kubernetes.io/projected/9b040975-c603-4b7a-875c-c372ddb0e24e-kube-api-access-wvhlt\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest" Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.596321 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " pod="openstack/tempest-tests-tempest" Feb 01 15:09:27 crc kubenswrapper[4820]: I0201 15:09:27.876928 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 01 15:09:28 crc kubenswrapper[4820]: I0201 15:09:28.399619 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 01 15:09:28 crc kubenswrapper[4820]: I0201 15:09:28.407207 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 01 15:09:28 crc kubenswrapper[4820]: I0201 15:09:28.526030 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9b040975-c603-4b7a-875c-c372ddb0e24e","Type":"ContainerStarted","Data":"f4f922a60716e2ef1f0eda797c14245a4d1d5cf8cf3f49bb9413729ab3dd7ce1"} Feb 01 15:09:49 crc kubenswrapper[4820]: I0201 15:09:49.243221 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 15:09:49 crc kubenswrapper[4820]: I0201 15:09:49.244265 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 15:09:49 crc kubenswrapper[4820]: I0201 15:09:49.244349 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 15:09:49 crc kubenswrapper[4820]: I0201 15:09:49.245407 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589"} pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 15:09:49 crc kubenswrapper[4820]: I0201 15:09:49.245502 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" 
containerID="cri-o://a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589" gracePeriod=600 Feb 01 15:09:49 crc kubenswrapper[4820]: I0201 15:09:49.745655 4820 generic.go:334] "Generic (PLEG): container finished" podID="060a9e0b-803f-4ccc-bed6-92614d449527" containerID="a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589" exitCode=0 Feb 01 15:09:49 crc kubenswrapper[4820]: I0201 15:09:49.745787 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerDied","Data":"a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589"} Feb 01 15:09:49 crc kubenswrapper[4820]: I0201 15:09:49.746451 4820 scope.go:117] "RemoveContainer" containerID="0d61213b4a10a4ab74b8c67738ccbaf8ce69c525fdf88a8f53d56aa59cdd82b9" Feb 01 15:10:17 crc kubenswrapper[4820]: E0201 15:10:17.277838 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:10:17 crc kubenswrapper[4820]: E0201 15:10:17.324862 4820 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 01 15:10:17 crc kubenswrapper[4820]: E0201 15:10:17.325080 4820 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},
VolumeMount{Name:kube-api-access-wvhlt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(9b040975-c603-4b7a-875c-c372ddb0e24e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 01 15:10:17 crc kubenswrapper[4820]: E0201 15:10:17.326351 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="9b040975-c603-4b7a-875c-c372ddb0e24e" Feb 01 15:10:18 crc kubenswrapper[4820]: I0201 15:10:18.059409 4820 scope.go:117] "RemoveContainer" containerID="a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589" Feb 01 15:10:18 crc kubenswrapper[4820]: E0201 15:10:18.059939 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:10:18 crc kubenswrapper[4820]: E0201 15:10:18.061420 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="9b040975-c603-4b7a-875c-c372ddb0e24e" Feb 01 15:10:30 crc kubenswrapper[4820]: I0201 15:10:30.199195 4820 scope.go:117] "RemoveContainer" containerID="a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589" Feb 01 15:10:30 crc kubenswrapper[4820]: E0201 15:10:30.200468 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:10:32 crc 
kubenswrapper[4820]: I0201 15:10:32.728465 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 01 15:10:34 crc kubenswrapper[4820]: I0201 15:10:34.244204 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9b040975-c603-4b7a-875c-c372ddb0e24e","Type":"ContainerStarted","Data":"71364fb429f365da3376852eac69aacfccf731c8004e5a699fa622bd359ffaf6"} Feb 01 15:10:34 crc kubenswrapper[4820]: I0201 15:10:34.284165 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.9583549700000003 podStartE2EDuration="1m8.284143346s" podCreationTimestamp="2026-02-01 15:09:26 +0000 UTC" firstStartedPulling="2026-02-01 15:09:28.399287038 +0000 UTC m=+2909.919653342" lastFinishedPulling="2026-02-01 15:10:32.725075394 +0000 UTC m=+2974.245441718" observedRunningTime="2026-02-01 15:10:34.271114639 +0000 UTC m=+2975.791480953" watchObservedRunningTime="2026-02-01 15:10:34.284143346 +0000 UTC m=+2975.804509650" Feb 01 15:10:41 crc kubenswrapper[4820]: I0201 15:10:41.199432 4820 scope.go:117] "RemoveContainer" containerID="a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589" Feb 01 15:10:41 crc kubenswrapper[4820]: E0201 15:10:41.200308 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:10:56 crc kubenswrapper[4820]: I0201 15:10:56.200153 4820 scope.go:117] "RemoveContainer" containerID="a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589" Feb 01 15:10:56 crc kubenswrapper[4820]: E0201 15:10:56.201041 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:11:08 crc kubenswrapper[4820]: I0201 15:11:08.199900 4820 scope.go:117] "RemoveContainer" containerID="a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589" Feb 01 15:11:08 crc kubenswrapper[4820]: E0201 15:11:08.200804 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:11:21 crc kubenswrapper[4820]: I0201 15:11:21.199750 4820 scope.go:117] "RemoveContainer" containerID="a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589" Feb 01 15:11:21 crc kubenswrapper[4820]: E0201 15:11:21.200632 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:11:36 crc kubenswrapper[4820]: I0201 15:11:36.199177 4820 scope.go:117] "RemoveContainer" containerID="a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589" Feb 01 15:11:36 crc kubenswrapper[4820]: E0201 15:11:36.200967 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:11:50 crc kubenswrapper[4820]: I0201 15:11:50.199047 4820 scope.go:117] "RemoveContainer" containerID="a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589" Feb 01 15:11:50 crc kubenswrapper[4820]: E0201 15:11:50.200076 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:12:03 crc kubenswrapper[4820]: I0201 15:12:03.199461 4820 scope.go:117] "RemoveContainer" containerID="a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589" Feb 01 15:12:03 crc kubenswrapper[4820]: E0201 15:12:03.200563 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:12:15 crc kubenswrapper[4820]: I0201 15:12:15.696707 4820 scope.go:117] "RemoveContainer" containerID="a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589" Feb 01 15:12:15 crc kubenswrapper[4820]: E0201 15:12:15.704508 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:12:31 crc kubenswrapper[4820]: I0201 15:12:31.199733 4820 scope.go:117] "RemoveContainer" containerID="a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589" Feb 01 15:12:31 crc kubenswrapper[4820]: E0201 15:12:31.200436 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:12:42 crc kubenswrapper[4820]: I0201 15:12:42.199150 4820 scope.go:117] "RemoveContainer" containerID="a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589" Feb 01 15:12:42 crc kubenswrapper[4820]: E0201 15:12:42.200299 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:12:47 crc kubenswrapper[4820]: I0201 15:12:47.907079 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kp8wx"] Feb 01 15:12:47 crc kubenswrapper[4820]: I0201 15:12:47.912516 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kp8wx" Feb 01 15:12:47 crc kubenswrapper[4820]: I0201 15:12:47.922977 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kp8wx"] Feb 01 15:12:47 crc kubenswrapper[4820]: I0201 15:12:47.995669 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr9n2\" (UniqueName: \"kubernetes.io/projected/f559dd05-941b-467c-9137-8ad66e1e8fb9-kube-api-access-pr9n2\") pod \"community-operators-kp8wx\" (UID: \"f559dd05-941b-467c-9137-8ad66e1e8fb9\") " pod="openshift-marketplace/community-operators-kp8wx" Feb 01 15:12:47 crc kubenswrapper[4820]: I0201 15:12:47.995749 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f559dd05-941b-467c-9137-8ad66e1e8fb9-utilities\") pod \"community-operators-kp8wx\" (UID: \"f559dd05-941b-467c-9137-8ad66e1e8fb9\") " pod="openshift-marketplace/community-operators-kp8wx" Feb 01 15:12:47 crc kubenswrapper[4820]: I0201 15:12:47.996002 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f559dd05-941b-467c-9137-8ad66e1e8fb9-catalog-content\") pod \"community-operators-kp8wx\" (UID: \"f559dd05-941b-467c-9137-8ad66e1e8fb9\") " pod="openshift-marketplace/community-operators-kp8wx" Feb 01 15:12:48 crc kubenswrapper[4820]: I0201 15:12:48.098532 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr9n2\" (UniqueName: \"kubernetes.io/projected/f559dd05-941b-467c-9137-8ad66e1e8fb9-kube-api-access-pr9n2\") pod \"community-operators-kp8wx\" (UID: \"f559dd05-941b-467c-9137-8ad66e1e8fb9\") " pod="openshift-marketplace/community-operators-kp8wx" Feb 01 15:12:48 crc kubenswrapper[4820]: I0201 15:12:48.098604 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f559dd05-941b-467c-9137-8ad66e1e8fb9-utilities\") pod \"community-operators-kp8wx\" (UID: \"f559dd05-941b-467c-9137-8ad66e1e8fb9\") " pod="openshift-marketplace/community-operators-kp8wx" Feb 01 15:12:48 crc kubenswrapper[4820]: I0201 15:12:48.098666 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f559dd05-941b-467c-9137-8ad66e1e8fb9-catalog-content\") pod \"community-operators-kp8wx\" (UID: \"f559dd05-941b-467c-9137-8ad66e1e8fb9\") " pod="openshift-marketplace/community-operators-kp8wx" Feb 01 15:12:48 crc kubenswrapper[4820]: I0201 15:12:48.099223 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f559dd05-941b-467c-9137-8ad66e1e8fb9-catalog-content\") pod \"community-operators-kp8wx\" (UID: \"f559dd05-941b-467c-9137-8ad66e1e8fb9\") " pod="openshift-marketplace/community-operators-kp8wx" Feb 01 15:12:48 crc kubenswrapper[4820]: I0201 15:12:48.099384 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f559dd05-941b-467c-9137-8ad66e1e8fb9-utilities\") pod \"community-operators-kp8wx\" (UID: \"f559dd05-941b-467c-9137-8ad66e1e8fb9\") " pod="openshift-marketplace/community-operators-kp8wx" Feb 01 15:12:48 crc kubenswrapper[4820]: I0201 15:12:48.120946 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr9n2\" (UniqueName: \"kubernetes.io/projected/f559dd05-941b-467c-9137-8ad66e1e8fb9-kube-api-access-pr9n2\") pod \"community-operators-kp8wx\" (UID: \"f559dd05-941b-467c-9137-8ad66e1e8fb9\") " pod="openshift-marketplace/community-operators-kp8wx" Feb 01 15:12:48 crc kubenswrapper[4820]: I0201 15:12:48.236359 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kp8wx" Feb 01 15:12:48 crc kubenswrapper[4820]: I0201 15:12:48.791021 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kp8wx"] Feb 01 15:12:48 crc kubenswrapper[4820]: I0201 15:12:48.897110 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xdzq9"] Feb 01 15:12:48 crc kubenswrapper[4820]: I0201 15:12:48.899646 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xdzq9" Feb 01 15:12:48 crc kubenswrapper[4820]: I0201 15:12:48.913716 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xdzq9"] Feb 01 15:12:49 crc kubenswrapper[4820]: I0201 15:12:49.026995 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kp8wx" event={"ID":"f559dd05-941b-467c-9137-8ad66e1e8fb9","Type":"ContainerStarted","Data":"0718d51a4cfcd43cc50fa096fb26112a6cc97ca28adad77e9fb104dd67739145"} Feb 01 15:12:49 crc kubenswrapper[4820]: I0201 15:12:49.027073 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42cc0bea-76f0-4c25-959a-49c9f1399906-catalog-content\") pod \"certified-operators-xdzq9\" (UID: \"42cc0bea-76f0-4c25-959a-49c9f1399906\") " pod="openshift-marketplace/certified-operators-xdzq9" Feb 01 15:12:49 crc kubenswrapper[4820]: I0201 15:12:49.027464 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42cc0bea-76f0-4c25-959a-49c9f1399906-utilities\") pod \"certified-operators-xdzq9\" (UID: \"42cc0bea-76f0-4c25-959a-49c9f1399906\") " pod="openshift-marketplace/certified-operators-xdzq9" Feb 01 15:12:49 crc kubenswrapper[4820]: I0201 15:12:49.027542 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65jpv\" (UniqueName: \"kubernetes.io/projected/42cc0bea-76f0-4c25-959a-49c9f1399906-kube-api-access-65jpv\") pod \"certified-operators-xdzq9\" (UID: \"42cc0bea-76f0-4c25-959a-49c9f1399906\") " pod="openshift-marketplace/certified-operators-xdzq9" Feb 01 15:12:49 crc kubenswrapper[4820]: I0201 15:12:49.129846 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42cc0bea-76f0-4c25-959a-49c9f1399906-utilities\") pod \"certified-operators-xdzq9\" (UID: \"42cc0bea-76f0-4c25-959a-49c9f1399906\") " pod="openshift-marketplace/certified-operators-xdzq9" Feb 01 15:12:49 crc kubenswrapper[4820]: I0201 15:12:49.129980 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65jpv\" (UniqueName: \"kubernetes.io/projected/42cc0bea-76f0-4c25-959a-49c9f1399906-kube-api-access-65jpv\") pod \"certified-operators-xdzq9\" (UID: \"42cc0bea-76f0-4c25-959a-49c9f1399906\") " pod="openshift-marketplace/certified-operators-xdzq9" Feb 01 15:12:49 crc kubenswrapper[4820]: I0201 15:12:49.130064 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42cc0bea-76f0-4c25-959a-49c9f1399906-catalog-content\") pod \"certified-operators-xdzq9\" (UID: \"42cc0bea-76f0-4c25-959a-49c9f1399906\") " pod="openshift-marketplace/certified-operators-xdzq9" Feb 01 15:12:49 crc kubenswrapper[4820]: I0201 15:12:49.130840 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42cc0bea-76f0-4c25-959a-49c9f1399906-catalog-content\") pod \"certified-operators-xdzq9\" (UID: \"42cc0bea-76f0-4c25-959a-49c9f1399906\") " pod="openshift-marketplace/certified-operators-xdzq9" Feb 01 15:12:49 crc kubenswrapper[4820]: I0201 15:12:49.130842 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/42cc0bea-76f0-4c25-959a-49c9f1399906-utilities\") pod \"certified-operators-xdzq9\" (UID: \"42cc0bea-76f0-4c25-959a-49c9f1399906\") " pod="openshift-marketplace/certified-operators-xdzq9" Feb 01 15:12:49 crc kubenswrapper[4820]: I0201 15:12:49.158816 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65jpv\" (UniqueName: \"kubernetes.io/projected/42cc0bea-76f0-4c25-959a-49c9f1399906-kube-api-access-65jpv\") pod \"certified-operators-xdzq9\" (UID: \"42cc0bea-76f0-4c25-959a-49c9f1399906\") " pod="openshift-marketplace/certified-operators-xdzq9" Feb 01 15:12:49 crc kubenswrapper[4820]: I0201 15:12:49.257527 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xdzq9" Feb 01 15:12:49 crc kubenswrapper[4820]: I0201 15:12:49.786027 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xdzq9"] Feb 01 15:12:49 crc kubenswrapper[4820]: W0201 15:12:49.796455 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42cc0bea_76f0_4c25_959a_49c9f1399906.slice/crio-28e2ba1f14e1a1e875b79be44fd99750521adba4513fc9ea7d4123da0bb37ddd WatchSource:0}: Error finding container 28e2ba1f14e1a1e875b79be44fd99750521adba4513fc9ea7d4123da0bb37ddd: Status 404 returned error can't find the container with id 28e2ba1f14e1a1e875b79be44fd99750521adba4513fc9ea7d4123da0bb37ddd Feb 01 15:12:50 crc kubenswrapper[4820]: I0201 15:12:50.035852 4820 generic.go:334] "Generic (PLEG): container finished" podID="f559dd05-941b-467c-9137-8ad66e1e8fb9" containerID="e3e1e38e94d0dd29f9fee450cf4188ebd8c4debaaae05a990cb5e109cac343a7" exitCode=0 Feb 01 15:12:50 crc kubenswrapper[4820]: I0201 15:12:50.035910 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kp8wx" event={"ID":"f559dd05-941b-467c-9137-8ad66e1e8fb9","Type":"ContainerDied","Data":"e3e1e38e94d0dd29f9fee450cf4188ebd8c4debaaae05a990cb5e109cac343a7"} Feb 01 15:12:50 crc kubenswrapper[4820]: I0201 15:12:50.037735 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xdzq9" event={"ID":"42cc0bea-76f0-4c25-959a-49c9f1399906","Type":"ContainerStarted","Data":"28e2ba1f14e1a1e875b79be44fd99750521adba4513fc9ea7d4123da0bb37ddd"} Feb 01 15:12:51 crc kubenswrapper[4820]: I0201 15:12:51.049591 4820 generic.go:334] "Generic (PLEG): container finished" podID="42cc0bea-76f0-4c25-959a-49c9f1399906" containerID="cacb9e8b44d06fe6a73d7723cabba051f54c636a982c1c8cba2e12a9e4e189f0" exitCode=0 Feb 01 15:12:51 crc kubenswrapper[4820]: I0201 15:12:51.050078 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xdzq9" event={"ID":"42cc0bea-76f0-4c25-959a-49c9f1399906","Type":"ContainerDied","Data":"cacb9e8b44d06fe6a73d7723cabba051f54c636a982c1c8cba2e12a9e4e189f0"} Feb 01 15:12:52 crc kubenswrapper[4820]: I0201 15:12:52.063025 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kp8wx" event={"ID":"f559dd05-941b-467c-9137-8ad66e1e8fb9","Type":"ContainerStarted","Data":"2dfded04501e878edfd4670e164a6f3d5358f96512543ffc3771651b053ce0b5"} Feb 01 15:12:53 crc kubenswrapper[4820]: I0201 15:12:53.077192 4820 generic.go:334] "Generic (PLEG): container finished" podID="f559dd05-941b-467c-9137-8ad66e1e8fb9" 
containerID="2dfded04501e878edfd4670e164a6f3d5358f96512543ffc3771651b053ce0b5" exitCode=0 Feb 01 15:12:53 crc kubenswrapper[4820]: I0201 15:12:53.077267 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kp8wx" event={"ID":"f559dd05-941b-467c-9137-8ad66e1e8fb9","Type":"ContainerDied","Data":"2dfded04501e878edfd4670e164a6f3d5358f96512543ffc3771651b053ce0b5"} Feb 01 15:12:53 crc kubenswrapper[4820]: I0201 15:12:53.199935 4820 scope.go:117] "RemoveContainer" containerID="a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589" Feb 01 15:12:53 crc kubenswrapper[4820]: E0201 15:12:53.200361 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:12:54 crc kubenswrapper[4820]: I0201 15:12:54.090586 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xdzq9" event={"ID":"42cc0bea-76f0-4c25-959a-49c9f1399906","Type":"ContainerStarted","Data":"bdcf6149e894eb1a8488d4a95aad27882e5f7269abf4a44fc9ca1c8e12adb146"} Feb 01 15:12:55 crc kubenswrapper[4820]: I0201 15:12:55.108509 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kp8wx" event={"ID":"f559dd05-941b-467c-9137-8ad66e1e8fb9","Type":"ContainerStarted","Data":"18abce35da90e1288d8c4fd753ea86f8c3b69070ca254e803d2d4fcc05ff93e2"} Feb 01 15:12:55 crc kubenswrapper[4820]: I0201 15:12:55.133869 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kp8wx" podStartSLOduration=4.455731406 podStartE2EDuration="8.133842698s" podCreationTimestamp="2026-02-01 15:12:47 +0000 UTC" firstStartedPulling="2026-02-01 15:12:50.038104693 +0000 UTC m=+3111.558470977" lastFinishedPulling="2026-02-01 15:12:53.716215985 +0000 UTC m=+3115.236582269" observedRunningTime="2026-02-01 15:12:55.124787668 +0000 UTC m=+3116.645153962" watchObservedRunningTime="2026-02-01 15:12:55.133842698 +0000 UTC m=+3116.654208992" Feb 01 15:12:57 crc kubenswrapper[4820]: I0201 15:12:57.135600 4820 generic.go:334] "Generic (PLEG): container finished" podID="42cc0bea-76f0-4c25-959a-49c9f1399906" containerID="bdcf6149e894eb1a8488d4a95aad27882e5f7269abf4a44fc9ca1c8e12adb146" exitCode=0 Feb 01 15:12:57 crc kubenswrapper[4820]: I0201 15:12:57.135707 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xdzq9" event={"ID":"42cc0bea-76f0-4c25-959a-49c9f1399906","Type":"ContainerDied","Data":"bdcf6149e894eb1a8488d4a95aad27882e5f7269abf4a44fc9ca1c8e12adb146"} Feb 01 15:12:58 crc kubenswrapper[4820]: I0201 15:12:58.151572 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xdzq9" event={"ID":"42cc0bea-76f0-4c25-959a-49c9f1399906","Type":"ContainerStarted","Data":"fb41a209dba3105d023afb27c0aa70542248eec32359e6386adfe7bd10451931"} Feb 01 15:12:58 crc kubenswrapper[4820]: I0201 15:12:58.178912 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xdzq9" podStartSLOduration=3.894378483 podStartE2EDuration="10.178867809s" 
podCreationTimestamp="2026-02-01 15:12:48 +0000 UTC" firstStartedPulling="2026-02-01 15:12:51.251769261 +0000 UTC m=+3112.772135545" lastFinishedPulling="2026-02-01 15:12:57.536258577 +0000 UTC m=+3119.056624871" observedRunningTime="2026-02-01 15:12:58.173046837 +0000 UTC m=+3119.693413131" watchObservedRunningTime="2026-02-01 15:12:58.178867809 +0000 UTC m=+3119.699234113" Feb 01 15:12:58 crc kubenswrapper[4820]: I0201 15:12:58.237396 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kp8wx" Feb 01 15:12:58 crc kubenswrapper[4820]: I0201 15:12:58.237443 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kp8wx" Feb 01 15:12:58 crc kubenswrapper[4820]: I0201 15:12:58.292550 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kp8wx" Feb 01 15:12:59 crc kubenswrapper[4820]: I0201 15:12:59.213252 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kp8wx" Feb 01 15:12:59 crc kubenswrapper[4820]: I0201 15:12:59.258007 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xdzq9" Feb 01 15:12:59 crc kubenswrapper[4820]: I0201 15:12:59.259260 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xdzq9" Feb 01 15:13:00 crc kubenswrapper[4820]: I0201 15:13:00.320950 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-xdzq9" podUID="42cc0bea-76f0-4c25-959a-49c9f1399906" containerName="registry-server" probeResult="failure" output=< Feb 01 15:13:00 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 01 15:13:00 crc kubenswrapper[4820]: > Feb 01 15:13:00 crc kubenswrapper[4820]: I0201 15:13:00.677002 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kp8wx"] Feb 01 15:13:01 crc kubenswrapper[4820]: I0201 15:13:01.174856 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kp8wx" podUID="f559dd05-941b-467c-9137-8ad66e1e8fb9" containerName="registry-server" containerID="cri-o://18abce35da90e1288d8c4fd753ea86f8c3b69070ca254e803d2d4fcc05ff93e2" gracePeriod=2 Feb 01 15:13:01 crc kubenswrapper[4820]: I0201 15:13:01.646006 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kp8wx" Feb 01 15:13:01 crc kubenswrapper[4820]: I0201 15:13:01.818692 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f559dd05-941b-467c-9137-8ad66e1e8fb9-catalog-content\") pod \"f559dd05-941b-467c-9137-8ad66e1e8fb9\" (UID: \"f559dd05-941b-467c-9137-8ad66e1e8fb9\") " Feb 01 15:13:01 crc kubenswrapper[4820]: I0201 15:13:01.818776 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr9n2\" (UniqueName: \"kubernetes.io/projected/f559dd05-941b-467c-9137-8ad66e1e8fb9-kube-api-access-pr9n2\") pod \"f559dd05-941b-467c-9137-8ad66e1e8fb9\" (UID: \"f559dd05-941b-467c-9137-8ad66e1e8fb9\") " Feb 01 15:13:01 crc kubenswrapper[4820]: I0201 15:13:01.818929 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f559dd05-941b-467c-9137-8ad66e1e8fb9-utilities\") pod \"f559dd05-941b-467c-9137-8ad66e1e8fb9\" (UID: \"f559dd05-941b-467c-9137-8ad66e1e8fb9\") " Feb 01 15:13:01 crc kubenswrapper[4820]: I0201 15:13:01.819638 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f559dd05-941b-467c-9137-8ad66e1e8fb9-utilities" (OuterVolumeSpecName: "utilities") pod "f559dd05-941b-467c-9137-8ad66e1e8fb9" (UID: "f559dd05-941b-467c-9137-8ad66e1e8fb9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:13:01 crc kubenswrapper[4820]: I0201 15:13:01.824633 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dd05-941b-467c-9137-8ad66e1e8fb9-kube-api-access-pr9n2" (OuterVolumeSpecName: "kube-api-access-pr9n2") pod "f559dd05-941b-467c-9137-8ad66e1e8fb9" (UID: "f559dd05-941b-467c-9137-8ad66e1e8fb9"). InnerVolumeSpecName "kube-api-access-pr9n2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:13:01 crc kubenswrapper[4820]: I0201 15:13:01.874899 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f559dd05-941b-467c-9137-8ad66e1e8fb9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f559dd05-941b-467c-9137-8ad66e1e8fb9" (UID: "f559dd05-941b-467c-9137-8ad66e1e8fb9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:13:01 crc kubenswrapper[4820]: I0201 15:13:01.921279 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f559dd05-941b-467c-9137-8ad66e1e8fb9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 15:13:01 crc kubenswrapper[4820]: I0201 15:13:01.921305 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr9n2\" (UniqueName: \"kubernetes.io/projected/f559dd05-941b-467c-9137-8ad66e1e8fb9-kube-api-access-pr9n2\") on node \"crc\" DevicePath \"\"" Feb 01 15:13:01 crc kubenswrapper[4820]: I0201 15:13:01.921315 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f559dd05-941b-467c-9137-8ad66e1e8fb9-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 15:13:02 crc kubenswrapper[4820]: I0201 15:13:02.184333 4820 generic.go:334] "Generic (PLEG): container finished" podID="f559dd05-941b-467c-9137-8ad66e1e8fb9" containerID="18abce35da90e1288d8c4fd753ea86f8c3b69070ca254e803d2d4fcc05ff93e2" exitCode=0 Feb 01 15:13:02 crc kubenswrapper[4820]: I0201 15:13:02.184424 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kp8wx" Feb 01 15:13:02 crc kubenswrapper[4820]: I0201 15:13:02.184426 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kp8wx" event={"ID":"f559dd05-941b-467c-9137-8ad66e1e8fb9","Type":"ContainerDied","Data":"18abce35da90e1288d8c4fd753ea86f8c3b69070ca254e803d2d4fcc05ff93e2"} Feb 01 15:13:02 crc kubenswrapper[4820]: I0201 15:13:02.185080 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kp8wx" event={"ID":"f559dd05-941b-467c-9137-8ad66e1e8fb9","Type":"ContainerDied","Data":"0718d51a4cfcd43cc50fa096fb26112a6cc97ca28adad77e9fb104dd67739145"} Feb 01 15:13:02 crc kubenswrapper[4820]: I0201 15:13:02.185107 4820 scope.go:117] "RemoveContainer" containerID="18abce35da90e1288d8c4fd753ea86f8c3b69070ca254e803d2d4fcc05ff93e2" Feb 01 15:13:02 crc kubenswrapper[4820]: I0201 15:13:02.217600 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kp8wx"] Feb 01 15:13:02 crc kubenswrapper[4820]: I0201 15:13:02.229526 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kp8wx"] Feb 01 15:13:02 crc kubenswrapper[4820]: I0201 15:13:02.229979 4820 scope.go:117] "RemoveContainer" containerID="2dfded04501e878edfd4670e164a6f3d5358f96512543ffc3771651b053ce0b5" Feb 01 15:13:02 crc kubenswrapper[4820]: I0201 15:13:02.256740 4820 scope.go:117] "RemoveContainer" containerID="e3e1e38e94d0dd29f9fee450cf4188ebd8c4debaaae05a990cb5e109cac343a7" Feb 01 15:13:02 crc kubenswrapper[4820]: I0201 15:13:02.295690 4820 scope.go:117] "RemoveContainer" containerID="18abce35da90e1288d8c4fd753ea86f8c3b69070ca254e803d2d4fcc05ff93e2" Feb 01 15:13:02 crc kubenswrapper[4820]: E0201 15:13:02.296264 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18abce35da90e1288d8c4fd753ea86f8c3b69070ca254e803d2d4fcc05ff93e2\": container with ID starting with 18abce35da90e1288d8c4fd753ea86f8c3b69070ca254e803d2d4fcc05ff93e2 not found: ID does not exist" containerID="18abce35da90e1288d8c4fd753ea86f8c3b69070ca254e803d2d4fcc05ff93e2" Feb 01 15:13:02 crc kubenswrapper[4820]: I0201 15:13:02.296292 
4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18abce35da90e1288d8c4fd753ea86f8c3b69070ca254e803d2d4fcc05ff93e2"} err="failed to get container status \"18abce35da90e1288d8c4fd753ea86f8c3b69070ca254e803d2d4fcc05ff93e2\": rpc error: code = NotFound desc = could not find container \"18abce35da90e1288d8c4fd753ea86f8c3b69070ca254e803d2d4fcc05ff93e2\": container with ID starting with 18abce35da90e1288d8c4fd753ea86f8c3b69070ca254e803d2d4fcc05ff93e2 not found: ID does not exist" Feb 01 15:13:02 crc kubenswrapper[4820]: I0201 15:13:02.296313 4820 scope.go:117] "RemoveContainer" containerID="2dfded04501e878edfd4670e164a6f3d5358f96512543ffc3771651b053ce0b5" Feb 01 15:13:02 crc kubenswrapper[4820]: E0201 15:13:02.296671 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dfded04501e878edfd4670e164a6f3d5358f96512543ffc3771651b053ce0b5\": container with ID starting with 2dfded04501e878edfd4670e164a6f3d5358f96512543ffc3771651b053ce0b5 not found: ID does not exist" containerID="2dfded04501e878edfd4670e164a6f3d5358f96512543ffc3771651b053ce0b5" Feb 01 15:13:02 crc kubenswrapper[4820]: I0201 15:13:02.296704 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dfded04501e878edfd4670e164a6f3d5358f96512543ffc3771651b053ce0b5"} err="failed to get container status \"2dfded04501e878edfd4670e164a6f3d5358f96512543ffc3771651b053ce0b5\": rpc error: code = NotFound desc = could not find container \"2dfded04501e878edfd4670e164a6f3d5358f96512543ffc3771651b053ce0b5\": container with ID starting with 2dfded04501e878edfd4670e164a6f3d5358f96512543ffc3771651b053ce0b5 not found: ID does not exist" Feb 01 15:13:02 crc kubenswrapper[4820]: I0201 15:13:02.296718 4820 scope.go:117] "RemoveContainer" containerID="e3e1e38e94d0dd29f9fee450cf4188ebd8c4debaaae05a990cb5e109cac343a7" Feb 01 15:13:02 crc kubenswrapper[4820]: E0201 15:13:02.297120 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3e1e38e94d0dd29f9fee450cf4188ebd8c4debaaae05a990cb5e109cac343a7\": container with ID starting with e3e1e38e94d0dd29f9fee450cf4188ebd8c4debaaae05a990cb5e109cac343a7 not found: ID does not exist" containerID="e3e1e38e94d0dd29f9fee450cf4188ebd8c4debaaae05a990cb5e109cac343a7" Feb 01 15:13:02 crc kubenswrapper[4820]: I0201 15:13:02.297147 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3e1e38e94d0dd29f9fee450cf4188ebd8c4debaaae05a990cb5e109cac343a7"} err="failed to get container status \"e3e1e38e94d0dd29f9fee450cf4188ebd8c4debaaae05a990cb5e109cac343a7\": rpc error: code = NotFound desc = could not find container \"e3e1e38e94d0dd29f9fee450cf4188ebd8c4debaaae05a990cb5e109cac343a7\": container with ID starting with e3e1e38e94d0dd29f9fee450cf4188ebd8c4debaaae05a990cb5e109cac343a7 not found: ID does not exist" Feb 01 15:13:03 crc kubenswrapper[4820]: I0201 15:13:03.210310 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dd05-941b-467c-9137-8ad66e1e8fb9" path="/var/lib/kubelet/pods/f559dd05-941b-467c-9137-8ad66e1e8fb9/volumes" Feb 01 15:13:06 crc kubenswrapper[4820]: I0201 15:13:06.198946 4820 scope.go:117] "RemoveContainer" containerID="a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589" Feb 01 15:13:06 crc kubenswrapper[4820]: E0201 15:13:06.200521 4820 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:13:09 crc kubenswrapper[4820]: I0201 15:13:09.345161 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xdzq9" Feb 01 15:13:09 crc kubenswrapper[4820]: I0201 15:13:09.413616 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xdzq9" Feb 01 15:13:09 crc kubenswrapper[4820]: I0201 15:13:09.603232 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xdzq9"] Feb 01 15:13:11 crc kubenswrapper[4820]: I0201 15:13:11.296597 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xdzq9" podUID="42cc0bea-76f0-4c25-959a-49c9f1399906" containerName="registry-server" containerID="cri-o://fb41a209dba3105d023afb27c0aa70542248eec32359e6386adfe7bd10451931" gracePeriod=2 Feb 01 15:13:11 crc kubenswrapper[4820]: I0201 15:13:11.652437 4820 scope.go:117] "RemoveContainer" containerID="b28977f89c28d14b467c829ed188a961a6e13bc5d6afb6189bcb2b6659a2f028" Feb 01 15:13:11 crc kubenswrapper[4820]: I0201 15:13:11.845199 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xdzq9" Feb 01 15:13:11 crc kubenswrapper[4820]: I0201 15:13:11.923303 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42cc0bea-76f0-4c25-959a-49c9f1399906-utilities\") pod \"42cc0bea-76f0-4c25-959a-49c9f1399906\" (UID: \"42cc0bea-76f0-4c25-959a-49c9f1399906\") " Feb 01 15:13:11 crc kubenswrapper[4820]: I0201 15:13:11.923624 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42cc0bea-76f0-4c25-959a-49c9f1399906-catalog-content\") pod \"42cc0bea-76f0-4c25-959a-49c9f1399906\" (UID: \"42cc0bea-76f0-4c25-959a-49c9f1399906\") " Feb 01 15:13:11 crc kubenswrapper[4820]: I0201 15:13:11.923759 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65jpv\" (UniqueName: \"kubernetes.io/projected/42cc0bea-76f0-4c25-959a-49c9f1399906-kube-api-access-65jpv\") pod \"42cc0bea-76f0-4c25-959a-49c9f1399906\" (UID: \"42cc0bea-76f0-4c25-959a-49c9f1399906\") " Feb 01 15:13:11 crc kubenswrapper[4820]: I0201 15:13:11.924217 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42cc0bea-76f0-4c25-959a-49c9f1399906-utilities" (OuterVolumeSpecName: "utilities") pod "42cc0bea-76f0-4c25-959a-49c9f1399906" (UID: "42cc0bea-76f0-4c25-959a-49c9f1399906"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:13:11 crc kubenswrapper[4820]: I0201 15:13:11.925937 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42cc0bea-76f0-4c25-959a-49c9f1399906-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 15:13:11 crc kubenswrapper[4820]: I0201 15:13:11.930856 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42cc0bea-76f0-4c25-959a-49c9f1399906-kube-api-access-65jpv" (OuterVolumeSpecName: "kube-api-access-65jpv") pod "42cc0bea-76f0-4c25-959a-49c9f1399906" (UID: "42cc0bea-76f0-4c25-959a-49c9f1399906"). InnerVolumeSpecName "kube-api-access-65jpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:13:11 crc kubenswrapper[4820]: I0201 15:13:11.966989 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42cc0bea-76f0-4c25-959a-49c9f1399906-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42cc0bea-76f0-4c25-959a-49c9f1399906" (UID: "42cc0bea-76f0-4c25-959a-49c9f1399906"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:13:12 crc kubenswrapper[4820]: I0201 15:13:12.028667 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42cc0bea-76f0-4c25-959a-49c9f1399906-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 15:13:12 crc kubenswrapper[4820]: I0201 15:13:12.029064 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65jpv\" (UniqueName: \"kubernetes.io/projected/42cc0bea-76f0-4c25-959a-49c9f1399906-kube-api-access-65jpv\") on node \"crc\" DevicePath \"\"" Feb 01 15:13:12 crc kubenswrapper[4820]: I0201 15:13:12.311117 4820 generic.go:334] "Generic (PLEG): container finished" podID="42cc0bea-76f0-4c25-959a-49c9f1399906" containerID="fb41a209dba3105d023afb27c0aa70542248eec32359e6386adfe7bd10451931" exitCode=0 Feb 01 15:13:12 crc kubenswrapper[4820]: I0201 15:13:12.311175 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xdzq9" event={"ID":"42cc0bea-76f0-4c25-959a-49c9f1399906","Type":"ContainerDied","Data":"fb41a209dba3105d023afb27c0aa70542248eec32359e6386adfe7bd10451931"} Feb 01 15:13:12 crc kubenswrapper[4820]: I0201 15:13:12.311198 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xdzq9" Feb 01 15:13:12 crc kubenswrapper[4820]: I0201 15:13:12.311212 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xdzq9" event={"ID":"42cc0bea-76f0-4c25-959a-49c9f1399906","Type":"ContainerDied","Data":"28e2ba1f14e1a1e875b79be44fd99750521adba4513fc9ea7d4123da0bb37ddd"} Feb 01 15:13:12 crc kubenswrapper[4820]: I0201 15:13:12.311243 4820 scope.go:117] "RemoveContainer" containerID="fb41a209dba3105d023afb27c0aa70542248eec32359e6386adfe7bd10451931" Feb 01 15:13:12 crc kubenswrapper[4820]: I0201 15:13:12.335774 4820 scope.go:117] "RemoveContainer" containerID="bdcf6149e894eb1a8488d4a95aad27882e5f7269abf4a44fc9ca1c8e12adb146" Feb 01 15:13:12 crc kubenswrapper[4820]: I0201 15:13:12.358989 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xdzq9"] Feb 01 15:13:12 crc kubenswrapper[4820]: I0201 15:13:12.370251 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xdzq9"] Feb 01 15:13:12 crc kubenswrapper[4820]: I0201 15:13:12.378078 4820 scope.go:117] "RemoveContainer" containerID="cacb9e8b44d06fe6a73d7723cabba051f54c636a982c1c8cba2e12a9e4e189f0" Feb 01 15:13:12 crc kubenswrapper[4820]: I0201 15:13:12.417456 4820 scope.go:117] "RemoveContainer" containerID="fb41a209dba3105d023afb27c0aa70542248eec32359e6386adfe7bd10451931" Feb 01 15:13:12 crc kubenswrapper[4820]: E0201 15:13:12.420574 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb41a209dba3105d023afb27c0aa70542248eec32359e6386adfe7bd10451931\": container with ID starting with fb41a209dba3105d023afb27c0aa70542248eec32359e6386adfe7bd10451931 not found: ID does not exist" containerID="fb41a209dba3105d023afb27c0aa70542248eec32359e6386adfe7bd10451931" Feb 01 15:13:12 crc kubenswrapper[4820]: I0201 15:13:12.420638 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb41a209dba3105d023afb27c0aa70542248eec32359e6386adfe7bd10451931"} err="failed to get container status \"fb41a209dba3105d023afb27c0aa70542248eec32359e6386adfe7bd10451931\": rpc error: code = NotFound desc = could not find container \"fb41a209dba3105d023afb27c0aa70542248eec32359e6386adfe7bd10451931\": container with ID starting with fb41a209dba3105d023afb27c0aa70542248eec32359e6386adfe7bd10451931 not found: ID does not exist" Feb 01 15:13:12 crc kubenswrapper[4820]: I0201 15:13:12.420669 4820 scope.go:117] "RemoveContainer" containerID="bdcf6149e894eb1a8488d4a95aad27882e5f7269abf4a44fc9ca1c8e12adb146" Feb 01 15:13:12 crc kubenswrapper[4820]: E0201 15:13:12.421160 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdcf6149e894eb1a8488d4a95aad27882e5f7269abf4a44fc9ca1c8e12adb146\": container with ID starting with bdcf6149e894eb1a8488d4a95aad27882e5f7269abf4a44fc9ca1c8e12adb146 not found: ID does not exist" containerID="bdcf6149e894eb1a8488d4a95aad27882e5f7269abf4a44fc9ca1c8e12adb146" Feb 01 15:13:12 crc kubenswrapper[4820]: I0201 15:13:12.421194 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdcf6149e894eb1a8488d4a95aad27882e5f7269abf4a44fc9ca1c8e12adb146"} err="failed to get container status \"bdcf6149e894eb1a8488d4a95aad27882e5f7269abf4a44fc9ca1c8e12adb146\": rpc error: code = NotFound desc = could not find 
container \"bdcf6149e894eb1a8488d4a95aad27882e5f7269abf4a44fc9ca1c8e12adb146\": container with ID starting with bdcf6149e894eb1a8488d4a95aad27882e5f7269abf4a44fc9ca1c8e12adb146 not found: ID does not exist" Feb 01 15:13:12 crc kubenswrapper[4820]: I0201 15:13:12.421217 4820 scope.go:117] "RemoveContainer" containerID="cacb9e8b44d06fe6a73d7723cabba051f54c636a982c1c8cba2e12a9e4e189f0" Feb 01 15:13:12 crc kubenswrapper[4820]: E0201 15:13:12.421469 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cacb9e8b44d06fe6a73d7723cabba051f54c636a982c1c8cba2e12a9e4e189f0\": container with ID starting with cacb9e8b44d06fe6a73d7723cabba051f54c636a982c1c8cba2e12a9e4e189f0 not found: ID does not exist" containerID="cacb9e8b44d06fe6a73d7723cabba051f54c636a982c1c8cba2e12a9e4e189f0" Feb 01 15:13:12 crc kubenswrapper[4820]: I0201 15:13:12.421495 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cacb9e8b44d06fe6a73d7723cabba051f54c636a982c1c8cba2e12a9e4e189f0"} err="failed to get container status \"cacb9e8b44d06fe6a73d7723cabba051f54c636a982c1c8cba2e12a9e4e189f0\": rpc error: code = NotFound desc = could not find container \"cacb9e8b44d06fe6a73d7723cabba051f54c636a982c1c8cba2e12a9e4e189f0\": container with ID starting with cacb9e8b44d06fe6a73d7723cabba051f54c636a982c1c8cba2e12a9e4e189f0 not found: ID does not exist" Feb 01 15:13:13 crc kubenswrapper[4820]: I0201 15:13:13.210935 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42cc0bea-76f0-4c25-959a-49c9f1399906" path="/var/lib/kubelet/pods/42cc0bea-76f0-4c25-959a-49c9f1399906/volumes" Feb 01 15:13:19 crc kubenswrapper[4820]: I0201 15:13:19.225204 4820 scope.go:117] "RemoveContainer" containerID="a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589" Feb 01 15:13:19 crc kubenswrapper[4820]: E0201 15:13:19.226955 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:13:30 crc kubenswrapper[4820]: I0201 15:13:30.199178 4820 scope.go:117] "RemoveContainer" containerID="a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589" Feb 01 15:13:30 crc kubenswrapper[4820]: E0201 15:13:30.199980 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:13:45 crc kubenswrapper[4820]: I0201 15:13:45.199968 4820 scope.go:117] "RemoveContainer" containerID="a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589" Feb 01 15:13:45 crc kubenswrapper[4820]: E0201 15:13:45.201314 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:13:57 crc kubenswrapper[4820]: I0201 15:13:57.199435 4820 scope.go:117] "RemoveContainer" containerID="a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589" Feb 01 15:13:57 crc kubenswrapper[4820]: E0201 15:13:57.200328 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:14:11 crc kubenswrapper[4820]: I0201 15:14:11.732860 4820 scope.go:117] "RemoveContainer" containerID="e3d513fee4c22326779615924705d06357ff59f7e252b594a9eec22c366211bc" Feb 01 15:14:12 crc kubenswrapper[4820]: I0201 15:14:12.199736 4820 scope.go:117] "RemoveContainer" containerID="a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589" Feb 01 15:14:12 crc kubenswrapper[4820]: E0201 15:14:12.200548 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:14:25 crc kubenswrapper[4820]: I0201 15:14:25.202265 4820 scope.go:117] "RemoveContainer" containerID="a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589" Feb 01 15:14:25 crc kubenswrapper[4820]: E0201 15:14:25.203247 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:14:38 crc kubenswrapper[4820]: I0201 15:14:38.200065 4820 scope.go:117] "RemoveContainer" containerID="a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589" Feb 01 15:14:38 crc kubenswrapper[4820]: E0201 15:14:38.202090 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:14:52 crc kubenswrapper[4820]: I0201 15:14:52.198938 4820 scope.go:117] "RemoveContainer" containerID="a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589" Feb 01 15:14:53 crc kubenswrapper[4820]: I0201 15:14:53.281846 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" 
event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerStarted","Data":"623ff580406e12d397a144e12f18b3b08667bc7fd7a128bf11255d0933a72222"} Feb 01 15:15:00 crc kubenswrapper[4820]: I0201 15:15:00.147348 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499315-bjtq5"] Feb 01 15:15:00 crc kubenswrapper[4820]: E0201 15:15:00.148917 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f559dd05-941b-467c-9137-8ad66e1e8fb9" containerName="extract-content" Feb 01 15:15:00 crc kubenswrapper[4820]: I0201 15:15:00.148940 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f559dd05-941b-467c-9137-8ad66e1e8fb9" containerName="extract-content" Feb 01 15:15:00 crc kubenswrapper[4820]: E0201 15:15:00.148958 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42cc0bea-76f0-4c25-959a-49c9f1399906" containerName="registry-server" Feb 01 15:15:00 crc kubenswrapper[4820]: I0201 15:15:00.148968 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="42cc0bea-76f0-4c25-959a-49c9f1399906" containerName="registry-server" Feb 01 15:15:00 crc kubenswrapper[4820]: E0201 15:15:00.149007 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f559dd05-941b-467c-9137-8ad66e1e8fb9" containerName="extract-utilities" Feb 01 15:15:00 crc kubenswrapper[4820]: I0201 15:15:00.149017 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f559dd05-941b-467c-9137-8ad66e1e8fb9" containerName="extract-utilities" Feb 01 15:15:00 crc kubenswrapper[4820]: E0201 15:15:00.149034 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42cc0bea-76f0-4c25-959a-49c9f1399906" containerName="extract-utilities" Feb 01 15:15:00 crc kubenswrapper[4820]: I0201 15:15:00.149043 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="42cc0bea-76f0-4c25-959a-49c9f1399906" containerName="extract-utilities" Feb 01 15:15:00 crc kubenswrapper[4820]: E0201 15:15:00.149076 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f559dd05-941b-467c-9137-8ad66e1e8fb9" containerName="registry-server" Feb 01 15:15:00 crc kubenswrapper[4820]: I0201 15:15:00.149086 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f559dd05-941b-467c-9137-8ad66e1e8fb9" containerName="registry-server" Feb 01 15:15:00 crc kubenswrapper[4820]: E0201 15:15:00.149100 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42cc0bea-76f0-4c25-959a-49c9f1399906" containerName="extract-content" Feb 01 15:15:00 crc kubenswrapper[4820]: I0201 15:15:00.149108 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="42cc0bea-76f0-4c25-959a-49c9f1399906" containerName="extract-content" Feb 01 15:15:00 crc kubenswrapper[4820]: I0201 15:15:00.149370 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f559dd05-941b-467c-9137-8ad66e1e8fb9" containerName="registry-server" Feb 01 15:15:00 crc kubenswrapper[4820]: I0201 15:15:00.149396 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="42cc0bea-76f0-4c25-959a-49c9f1399906" containerName="registry-server" Feb 01 15:15:00 crc kubenswrapper[4820]: I0201 15:15:00.150521 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499315-bjtq5" Feb 01 15:15:00 crc kubenswrapper[4820]: I0201 15:15:00.154898 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 01 15:15:00 crc kubenswrapper[4820]: I0201 15:15:00.155172 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 01 15:15:00 crc kubenswrapper[4820]: I0201 15:15:00.160303 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499315-bjtq5"] Feb 01 15:15:00 crc kubenswrapper[4820]: I0201 15:15:00.266809 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g5rp\" (UniqueName: \"kubernetes.io/projected/ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6-kube-api-access-4g5rp\") pod \"collect-profiles-29499315-bjtq5\" (UID: \"ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499315-bjtq5" Feb 01 15:15:00 crc kubenswrapper[4820]: I0201 15:15:00.266884 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6-secret-volume\") pod \"collect-profiles-29499315-bjtq5\" (UID: \"ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499315-bjtq5" Feb 01 15:15:00 crc kubenswrapper[4820]: I0201 15:15:00.267432 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6-config-volume\") pod \"collect-profiles-29499315-bjtq5\" (UID: \"ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499315-bjtq5" Feb 01 15:15:00 crc kubenswrapper[4820]: I0201 15:15:00.369507 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6-config-volume\") pod \"collect-profiles-29499315-bjtq5\" (UID: \"ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499315-bjtq5" Feb 01 15:15:00 crc kubenswrapper[4820]: I0201 15:15:00.369605 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g5rp\" (UniqueName: \"kubernetes.io/projected/ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6-kube-api-access-4g5rp\") pod \"collect-profiles-29499315-bjtq5\" (UID: \"ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499315-bjtq5" Feb 01 15:15:00 crc kubenswrapper[4820]: I0201 15:15:00.369637 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6-secret-volume\") pod \"collect-profiles-29499315-bjtq5\" (UID: \"ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499315-bjtq5" Feb 01 15:15:00 crc kubenswrapper[4820]: I0201 15:15:00.371908 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6-config-volume\") pod 
\"collect-profiles-29499315-bjtq5\" (UID: \"ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499315-bjtq5" Feb 01 15:15:00 crc kubenswrapper[4820]: I0201 15:15:00.378854 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6-secret-volume\") pod \"collect-profiles-29499315-bjtq5\" (UID: \"ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499315-bjtq5" Feb 01 15:15:00 crc kubenswrapper[4820]: I0201 15:15:00.401482 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g5rp\" (UniqueName: \"kubernetes.io/projected/ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6-kube-api-access-4g5rp\") pod \"collect-profiles-29499315-bjtq5\" (UID: \"ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499315-bjtq5" Feb 01 15:15:00 crc kubenswrapper[4820]: I0201 15:15:00.481023 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499315-bjtq5" Feb 01 15:15:01 crc kubenswrapper[4820]: I0201 15:15:01.049581 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499315-bjtq5"] Feb 01 15:15:01 crc kubenswrapper[4820]: I0201 15:15:01.371929 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499315-bjtq5" event={"ID":"ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6","Type":"ContainerStarted","Data":"016938d42a7aefbb96e682a9e88a110672b60b5fd5ae9e48ced29937bfc7998d"} Feb 01 15:15:01 crc kubenswrapper[4820]: I0201 15:15:01.372282 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499315-bjtq5" event={"ID":"ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6","Type":"ContainerStarted","Data":"60816c3bcec6dde5babb4e76231c6424247950a2f71e4b14bb8e3fc339c3b7ef"} Feb 01 15:15:01 crc kubenswrapper[4820]: I0201 15:15:01.395647 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29499315-bjtq5" podStartSLOduration=1.3956276349999999 podStartE2EDuration="1.395627635s" podCreationTimestamp="2026-02-01 15:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 15:15:01.391192308 +0000 UTC m=+3242.911558602" watchObservedRunningTime="2026-02-01 15:15:01.395627635 +0000 UTC m=+3242.915993919" Feb 01 15:15:02 crc kubenswrapper[4820]: I0201 15:15:02.381232 4820 generic.go:334] "Generic (PLEG): container finished" podID="ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6" containerID="016938d42a7aefbb96e682a9e88a110672b60b5fd5ae9e48ced29937bfc7998d" exitCode=0 Feb 01 15:15:02 crc kubenswrapper[4820]: I0201 15:15:02.382396 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499315-bjtq5" event={"ID":"ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6","Type":"ContainerDied","Data":"016938d42a7aefbb96e682a9e88a110672b60b5fd5ae9e48ced29937bfc7998d"} Feb 01 15:15:03 crc kubenswrapper[4820]: I0201 15:15:03.791333 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499315-bjtq5" Feb 01 15:15:03 crc kubenswrapper[4820]: I0201 15:15:03.943456 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6-config-volume\") pod \"ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6\" (UID: \"ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6\") " Feb 01 15:15:03 crc kubenswrapper[4820]: I0201 15:15:03.943734 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6-secret-volume\") pod \"ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6\" (UID: \"ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6\") " Feb 01 15:15:03 crc kubenswrapper[4820]: I0201 15:15:03.943770 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g5rp\" (UniqueName: \"kubernetes.io/projected/ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6-kube-api-access-4g5rp\") pod \"ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6\" (UID: \"ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6\") " Feb 01 15:15:03 crc kubenswrapper[4820]: I0201 15:15:03.944298 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6-config-volume" (OuterVolumeSpecName: "config-volume") pod "ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6" (UID: "ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 15:15:03 crc kubenswrapper[4820]: I0201 15:15:03.949707 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6" (UID: "ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:15:03 crc kubenswrapper[4820]: I0201 15:15:03.950343 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6-kube-api-access-4g5rp" (OuterVolumeSpecName: "kube-api-access-4g5rp") pod "ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6" (UID: "ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6"). InnerVolumeSpecName "kube-api-access-4g5rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:15:04 crc kubenswrapper[4820]: I0201 15:15:04.046358 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 01 15:15:04 crc kubenswrapper[4820]: I0201 15:15:04.046393 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g5rp\" (UniqueName: \"kubernetes.io/projected/ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6-kube-api-access-4g5rp\") on node \"crc\" DevicePath \"\"" Feb 01 15:15:04 crc kubenswrapper[4820]: I0201 15:15:04.046402 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 01 15:15:04 crc kubenswrapper[4820]: I0201 15:15:04.404017 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499315-bjtq5" Feb 01 15:15:04 crc kubenswrapper[4820]: I0201 15:15:04.404013 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499315-bjtq5" event={"ID":"ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6","Type":"ContainerDied","Data":"60816c3bcec6dde5babb4e76231c6424247950a2f71e4b14bb8e3fc339c3b7ef"} Feb 01 15:15:04 crc kubenswrapper[4820]: I0201 15:15:04.404445 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60816c3bcec6dde5babb4e76231c6424247950a2f71e4b14bb8e3fc339c3b7ef" Feb 01 15:15:04 crc kubenswrapper[4820]: I0201 15:15:04.470345 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499270-bqfsn"] Feb 01 15:15:04 crc kubenswrapper[4820]: I0201 15:15:04.478494 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499270-bqfsn"] Feb 01 15:15:05 crc kubenswrapper[4820]: I0201 15:15:05.210740 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3efd4961-21a3-451a-aca3-f32bd9e1d045" path="/var/lib/kubelet/pods/3efd4961-21a3-451a-aca3-f32bd9e1d045/volumes" Feb 01 15:15:12 crc kubenswrapper[4820]: I0201 15:15:12.014793 4820 scope.go:117] "RemoveContainer" containerID="95a8b4c937921840549b43342aabb09eaa31b74f1f372f8ac308af936f9c2472" Feb 01 15:17:01 crc kubenswrapper[4820]: I0201 15:17:01.066012 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-3bac-account-create-update-g2ggq"] Feb 01 15:17:01 crc kubenswrapper[4820]: I0201 15:17:01.079151 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-7b9z7"] Feb 01 15:17:01 crc kubenswrapper[4820]: I0201 15:17:01.091318 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-3bac-account-create-update-g2ggq"] Feb 01 15:17:01 crc kubenswrapper[4820]: I0201 15:17:01.100311 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-7b9z7"] Feb 01 15:17:01 crc kubenswrapper[4820]: I0201 15:17:01.234338 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09f978a2-b4d7-4e2c-83f1-41778effd23c" path="/var/lib/kubelet/pods/09f978a2-b4d7-4e2c-83f1-41778effd23c/volumes" Feb 01 15:17:01 crc kubenswrapper[4820]: I0201 15:17:01.235331 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7be6fa9-d20e-4b1c-82c4-4a13ddde938e" path="/var/lib/kubelet/pods/c7be6fa9-d20e-4b1c-82c4-4a13ddde938e/volumes" Feb 01 15:17:12 crc kubenswrapper[4820]: I0201 15:17:12.111626 4820 scope.go:117] "RemoveContainer" containerID="53a286d16a6a50f100e976574934616e8dd87993d88d1ee5c1bf3d4365b9c67b" Feb 01 15:17:12 crc kubenswrapper[4820]: I0201 15:17:12.155626 4820 scope.go:117] "RemoveContainer" containerID="d897d2b03ec08336250a7e9f2ac34f98a958cdcf4339a51c3f17e1923c004a95" Feb 01 15:17:19 crc kubenswrapper[4820]: I0201 15:17:19.242012 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 15:17:19 crc kubenswrapper[4820]: I0201 15:17:19.242626 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" 
podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 15:17:23 crc kubenswrapper[4820]: I0201 15:17:23.512349 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4dqwf"] Feb 01 15:17:23 crc kubenswrapper[4820]: E0201 15:17:23.514661 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6" containerName="collect-profiles" Feb 01 15:17:23 crc kubenswrapper[4820]: I0201 15:17:23.514767 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6" containerName="collect-profiles" Feb 01 15:17:23 crc kubenswrapper[4820]: I0201 15:17:23.515056 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebd42c94-bd52-4bca-aa4a-fa9291d1c2d6" containerName="collect-profiles" Feb 01 15:17:23 crc kubenswrapper[4820]: I0201 15:17:23.520444 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dqwf" Feb 01 15:17:23 crc kubenswrapper[4820]: I0201 15:17:23.531654 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4dqwf"] Feb 01 15:17:23 crc kubenswrapper[4820]: I0201 15:17:23.554384 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfwtl\" (UniqueName: \"kubernetes.io/projected/7322a5a9-d67c-45be-8a3d-43a9f4232b6d-kube-api-access-tfwtl\") pod \"redhat-operators-4dqwf\" (UID: \"7322a5a9-d67c-45be-8a3d-43a9f4232b6d\") " pod="openshift-marketplace/redhat-operators-4dqwf" Feb 01 15:17:23 crc kubenswrapper[4820]: I0201 15:17:23.554606 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7322a5a9-d67c-45be-8a3d-43a9f4232b6d-utilities\") pod \"redhat-operators-4dqwf\" (UID: \"7322a5a9-d67c-45be-8a3d-43a9f4232b6d\") " pod="openshift-marketplace/redhat-operators-4dqwf" Feb 01 15:17:23 crc kubenswrapper[4820]: I0201 15:17:23.554636 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7322a5a9-d67c-45be-8a3d-43a9f4232b6d-catalog-content\") pod \"redhat-operators-4dqwf\" (UID: \"7322a5a9-d67c-45be-8a3d-43a9f4232b6d\") " pod="openshift-marketplace/redhat-operators-4dqwf" Feb 01 15:17:23 crc kubenswrapper[4820]: I0201 15:17:23.656112 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7322a5a9-d67c-45be-8a3d-43a9f4232b6d-catalog-content\") pod \"redhat-operators-4dqwf\" (UID: \"7322a5a9-d67c-45be-8a3d-43a9f4232b6d\") " pod="openshift-marketplace/redhat-operators-4dqwf" Feb 01 15:17:23 crc kubenswrapper[4820]: I0201 15:17:23.656201 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfwtl\" (UniqueName: \"kubernetes.io/projected/7322a5a9-d67c-45be-8a3d-43a9f4232b6d-kube-api-access-tfwtl\") pod \"redhat-operators-4dqwf\" (UID: \"7322a5a9-d67c-45be-8a3d-43a9f4232b6d\") " pod="openshift-marketplace/redhat-operators-4dqwf" Feb 01 15:17:23 crc kubenswrapper[4820]: I0201 15:17:23.656352 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7322a5a9-d67c-45be-8a3d-43a9f4232b6d-utilities\") pod \"redhat-operators-4dqwf\" (UID: \"7322a5a9-d67c-45be-8a3d-43a9f4232b6d\") " pod="openshift-marketplace/redhat-operators-4dqwf" Feb 01 15:17:23 crc kubenswrapper[4820]: I0201 15:17:23.656801 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7322a5a9-d67c-45be-8a3d-43a9f4232b6d-utilities\") pod \"redhat-operators-4dqwf\" (UID: \"7322a5a9-d67c-45be-8a3d-43a9f4232b6d\") " pod="openshift-marketplace/redhat-operators-4dqwf" Feb 01 15:17:23 crc kubenswrapper[4820]: I0201 15:17:23.656922 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7322a5a9-d67c-45be-8a3d-43a9f4232b6d-catalog-content\") pod \"redhat-operators-4dqwf\" (UID: \"7322a5a9-d67c-45be-8a3d-43a9f4232b6d\") " pod="openshift-marketplace/redhat-operators-4dqwf" Feb 01 15:17:23 crc kubenswrapper[4820]: I0201 15:17:23.685473 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfwtl\" (UniqueName: \"kubernetes.io/projected/7322a5a9-d67c-45be-8a3d-43a9f4232b6d-kube-api-access-tfwtl\") pod \"redhat-operators-4dqwf\" (UID: \"7322a5a9-d67c-45be-8a3d-43a9f4232b6d\") " pod="openshift-marketplace/redhat-operators-4dqwf" Feb 01 15:17:23 crc kubenswrapper[4820]: I0201 15:17:23.842610 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dqwf" Feb 01 15:17:24 crc kubenswrapper[4820]: I0201 15:17:24.317330 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4dqwf"] Feb 01 15:17:24 crc kubenswrapper[4820]: I0201 15:17:24.773707 4820 generic.go:334] "Generic (PLEG): container finished" podID="7322a5a9-d67c-45be-8a3d-43a9f4232b6d" containerID="ebeda2f81d4ffae0d40192c053500c156cb7b7708a431e2843516d8bfc2ac455" exitCode=0 Feb 01 15:17:24 crc kubenswrapper[4820]: I0201 15:17:24.773753 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dqwf" event={"ID":"7322a5a9-d67c-45be-8a3d-43a9f4232b6d","Type":"ContainerDied","Data":"ebeda2f81d4ffae0d40192c053500c156cb7b7708a431e2843516d8bfc2ac455"} Feb 01 15:17:24 crc kubenswrapper[4820]: I0201 15:17:24.773781 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dqwf" event={"ID":"7322a5a9-d67c-45be-8a3d-43a9f4232b6d","Type":"ContainerStarted","Data":"1e685b53bcc09df46f22db611fc2d1b133ab29e8749469b29f77238e01c1ac6c"} Feb 01 15:17:24 crc kubenswrapper[4820]: I0201 15:17:24.775560 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 01 15:17:25 crc kubenswrapper[4820]: I0201 15:17:25.784900 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dqwf" event={"ID":"7322a5a9-d67c-45be-8a3d-43a9f4232b6d","Type":"ContainerStarted","Data":"89372cb8b5ba89c94552331ba780c1a0f1448acf77410f5a9d3ebfc549a20f42"} Feb 01 15:17:27 crc kubenswrapper[4820]: I0201 15:17:27.803427 4820 generic.go:334] "Generic (PLEG): container finished" podID="7322a5a9-d67c-45be-8a3d-43a9f4232b6d" containerID="89372cb8b5ba89c94552331ba780c1a0f1448acf77410f5a9d3ebfc549a20f42" exitCode=0 Feb 01 15:17:27 crc kubenswrapper[4820]: I0201 15:17:27.803470 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dqwf" 
event={"ID":"7322a5a9-d67c-45be-8a3d-43a9f4232b6d","Type":"ContainerDied","Data":"89372cb8b5ba89c94552331ba780c1a0f1448acf77410f5a9d3ebfc549a20f42"} Feb 01 15:17:28 crc kubenswrapper[4820]: I0201 15:17:28.818468 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dqwf" event={"ID":"7322a5a9-d67c-45be-8a3d-43a9f4232b6d","Type":"ContainerStarted","Data":"5a84edb68e34afe49cecfed9e2df91f1172402502aeb6b5fab054c5824d586a5"} Feb 01 15:17:28 crc kubenswrapper[4820]: I0201 15:17:28.840396 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4dqwf" podStartSLOduration=2.34356716 podStartE2EDuration="5.840374166s" podCreationTimestamp="2026-02-01 15:17:23 +0000 UTC" firstStartedPulling="2026-02-01 15:17:24.77526672 +0000 UTC m=+3386.295640074" lastFinishedPulling="2026-02-01 15:17:28.272080786 +0000 UTC m=+3389.792447080" observedRunningTime="2026-02-01 15:17:28.835547438 +0000 UTC m=+3390.355913722" watchObservedRunningTime="2026-02-01 15:17:28.840374166 +0000 UTC m=+3390.360740450" Feb 01 15:17:33 crc kubenswrapper[4820]: I0201 15:17:33.843513 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4dqwf" Feb 01 15:17:33 crc kubenswrapper[4820]: I0201 15:17:33.844161 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4dqwf" Feb 01 15:17:34 crc kubenswrapper[4820]: I0201 15:17:34.926383 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4dqwf" podUID="7322a5a9-d67c-45be-8a3d-43a9f4232b6d" containerName="registry-server" probeResult="failure" output=< Feb 01 15:17:34 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 01 15:17:34 crc kubenswrapper[4820]: > Feb 01 15:17:37 crc kubenswrapper[4820]: I0201 15:17:37.057539 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-ptbpt"] Feb 01 15:17:37 crc kubenswrapper[4820]: I0201 15:17:37.067317 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-ptbpt"] Feb 01 15:17:37 crc kubenswrapper[4820]: I0201 15:17:37.224467 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052" path="/var/lib/kubelet/pods/d62afd6d-5fbd-4ca5-bff0-c5f4a5e01052/volumes" Feb 01 15:17:43 crc kubenswrapper[4820]: I0201 15:17:43.890798 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4dqwf" Feb 01 15:17:43 crc kubenswrapper[4820]: I0201 15:17:43.945723 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4dqwf" Feb 01 15:17:44 crc kubenswrapper[4820]: I0201 15:17:44.142243 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4dqwf"] Feb 01 15:17:45 crc kubenswrapper[4820]: I0201 15:17:45.006731 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4dqwf" podUID="7322a5a9-d67c-45be-8a3d-43a9f4232b6d" containerName="registry-server" containerID="cri-o://5a84edb68e34afe49cecfed9e2df91f1172402502aeb6b5fab054c5824d586a5" gracePeriod=2 Feb 01 15:17:45 crc kubenswrapper[4820]: I0201 15:17:45.542581 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4dqwf" Feb 01 15:17:45 crc kubenswrapper[4820]: I0201 15:17:45.726955 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfwtl\" (UniqueName: \"kubernetes.io/projected/7322a5a9-d67c-45be-8a3d-43a9f4232b6d-kube-api-access-tfwtl\") pod \"7322a5a9-d67c-45be-8a3d-43a9f4232b6d\" (UID: \"7322a5a9-d67c-45be-8a3d-43a9f4232b6d\") " Feb 01 15:17:45 crc kubenswrapper[4820]: I0201 15:17:45.727073 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7322a5a9-d67c-45be-8a3d-43a9f4232b6d-catalog-content\") pod \"7322a5a9-d67c-45be-8a3d-43a9f4232b6d\" (UID: \"7322a5a9-d67c-45be-8a3d-43a9f4232b6d\") " Feb 01 15:17:45 crc kubenswrapper[4820]: I0201 15:17:45.727236 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7322a5a9-d67c-45be-8a3d-43a9f4232b6d-utilities\") pod \"7322a5a9-d67c-45be-8a3d-43a9f4232b6d\" (UID: \"7322a5a9-d67c-45be-8a3d-43a9f4232b6d\") " Feb 01 15:17:45 crc kubenswrapper[4820]: I0201 15:17:45.728045 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7322a5a9-d67c-45be-8a3d-43a9f4232b6d-utilities" (OuterVolumeSpecName: "utilities") pod "7322a5a9-d67c-45be-8a3d-43a9f4232b6d" (UID: "7322a5a9-d67c-45be-8a3d-43a9f4232b6d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:17:45 crc kubenswrapper[4820]: I0201 15:17:45.734060 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7322a5a9-d67c-45be-8a3d-43a9f4232b6d-kube-api-access-tfwtl" (OuterVolumeSpecName: "kube-api-access-tfwtl") pod "7322a5a9-d67c-45be-8a3d-43a9f4232b6d" (UID: "7322a5a9-d67c-45be-8a3d-43a9f4232b6d"). InnerVolumeSpecName "kube-api-access-tfwtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:17:45 crc kubenswrapper[4820]: I0201 15:17:45.829100 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7322a5a9-d67c-45be-8a3d-43a9f4232b6d-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 15:17:45 crc kubenswrapper[4820]: I0201 15:17:45.829129 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfwtl\" (UniqueName: \"kubernetes.io/projected/7322a5a9-d67c-45be-8a3d-43a9f4232b6d-kube-api-access-tfwtl\") on node \"crc\" DevicePath \"\"" Feb 01 15:17:45 crc kubenswrapper[4820]: I0201 15:17:45.840724 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7322a5a9-d67c-45be-8a3d-43a9f4232b6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7322a5a9-d67c-45be-8a3d-43a9f4232b6d" (UID: "7322a5a9-d67c-45be-8a3d-43a9f4232b6d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:17:45 crc kubenswrapper[4820]: I0201 15:17:45.930455 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7322a5a9-d67c-45be-8a3d-43a9f4232b6d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 15:17:46 crc kubenswrapper[4820]: I0201 15:17:46.015419 4820 generic.go:334] "Generic (PLEG): container finished" podID="7322a5a9-d67c-45be-8a3d-43a9f4232b6d" containerID="5a84edb68e34afe49cecfed9e2df91f1172402502aeb6b5fab054c5824d586a5" exitCode=0 Feb 01 15:17:46 crc kubenswrapper[4820]: I0201 15:17:46.015472 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dqwf" event={"ID":"7322a5a9-d67c-45be-8a3d-43a9f4232b6d","Type":"ContainerDied","Data":"5a84edb68e34afe49cecfed9e2df91f1172402502aeb6b5fab054c5824d586a5"} Feb 01 15:17:46 crc kubenswrapper[4820]: I0201 15:17:46.015507 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dqwf" event={"ID":"7322a5a9-d67c-45be-8a3d-43a9f4232b6d","Type":"ContainerDied","Data":"1e685b53bcc09df46f22db611fc2d1b133ab29e8749469b29f77238e01c1ac6c"} Feb 01 15:17:46 crc kubenswrapper[4820]: I0201 15:17:46.015529 4820 scope.go:117] "RemoveContainer" containerID="5a84edb68e34afe49cecfed9e2df91f1172402502aeb6b5fab054c5824d586a5" Feb 01 15:17:46 crc kubenswrapper[4820]: I0201 15:17:46.015551 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dqwf" Feb 01 15:17:46 crc kubenswrapper[4820]: I0201 15:17:46.035078 4820 scope.go:117] "RemoveContainer" containerID="89372cb8b5ba89c94552331ba780c1a0f1448acf77410f5a9d3ebfc549a20f42" Feb 01 15:17:46 crc kubenswrapper[4820]: I0201 15:17:46.056888 4820 scope.go:117] "RemoveContainer" containerID="ebeda2f81d4ffae0d40192c053500c156cb7b7708a431e2843516d8bfc2ac455" Feb 01 15:17:46 crc kubenswrapper[4820]: I0201 15:17:46.060636 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4dqwf"] Feb 01 15:17:46 crc kubenswrapper[4820]: I0201 15:17:46.070574 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4dqwf"] Feb 01 15:17:46 crc kubenswrapper[4820]: I0201 15:17:46.108050 4820 scope.go:117] "RemoveContainer" containerID="5a84edb68e34afe49cecfed9e2df91f1172402502aeb6b5fab054c5824d586a5" Feb 01 15:17:46 crc kubenswrapper[4820]: E0201 15:17:46.109420 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a84edb68e34afe49cecfed9e2df91f1172402502aeb6b5fab054c5824d586a5\": container with ID starting with 5a84edb68e34afe49cecfed9e2df91f1172402502aeb6b5fab054c5824d586a5 not found: ID does not exist" containerID="5a84edb68e34afe49cecfed9e2df91f1172402502aeb6b5fab054c5824d586a5" Feb 01 15:17:46 crc kubenswrapper[4820]: I0201 15:17:46.109452 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a84edb68e34afe49cecfed9e2df91f1172402502aeb6b5fab054c5824d586a5"} err="failed to get container status \"5a84edb68e34afe49cecfed9e2df91f1172402502aeb6b5fab054c5824d586a5\": rpc error: code = NotFound desc = could not find container \"5a84edb68e34afe49cecfed9e2df91f1172402502aeb6b5fab054c5824d586a5\": container with ID starting with 5a84edb68e34afe49cecfed9e2df91f1172402502aeb6b5fab054c5824d586a5 not found: ID does not exist" Feb 01 15:17:46 crc 
kubenswrapper[4820]: I0201 15:17:46.109473 4820 scope.go:117] "RemoveContainer" containerID="89372cb8b5ba89c94552331ba780c1a0f1448acf77410f5a9d3ebfc549a20f42" Feb 01 15:17:46 crc kubenswrapper[4820]: E0201 15:17:46.109786 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89372cb8b5ba89c94552331ba780c1a0f1448acf77410f5a9d3ebfc549a20f42\": container with ID starting with 89372cb8b5ba89c94552331ba780c1a0f1448acf77410f5a9d3ebfc549a20f42 not found: ID does not exist" containerID="89372cb8b5ba89c94552331ba780c1a0f1448acf77410f5a9d3ebfc549a20f42" Feb 01 15:17:46 crc kubenswrapper[4820]: I0201 15:17:46.109815 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89372cb8b5ba89c94552331ba780c1a0f1448acf77410f5a9d3ebfc549a20f42"} err="failed to get container status \"89372cb8b5ba89c94552331ba780c1a0f1448acf77410f5a9d3ebfc549a20f42\": rpc error: code = NotFound desc = could not find container \"89372cb8b5ba89c94552331ba780c1a0f1448acf77410f5a9d3ebfc549a20f42\": container with ID starting with 89372cb8b5ba89c94552331ba780c1a0f1448acf77410f5a9d3ebfc549a20f42 not found: ID does not exist" Feb 01 15:17:46 crc kubenswrapper[4820]: I0201 15:17:46.109835 4820 scope.go:117] "RemoveContainer" containerID="ebeda2f81d4ffae0d40192c053500c156cb7b7708a431e2843516d8bfc2ac455" Feb 01 15:17:46 crc kubenswrapper[4820]: E0201 15:17:46.110268 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebeda2f81d4ffae0d40192c053500c156cb7b7708a431e2843516d8bfc2ac455\": container with ID starting with ebeda2f81d4ffae0d40192c053500c156cb7b7708a431e2843516d8bfc2ac455 not found: ID does not exist" containerID="ebeda2f81d4ffae0d40192c053500c156cb7b7708a431e2843516d8bfc2ac455" Feb 01 15:17:46 crc kubenswrapper[4820]: I0201 15:17:46.110301 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebeda2f81d4ffae0d40192c053500c156cb7b7708a431e2843516d8bfc2ac455"} err="failed to get container status \"ebeda2f81d4ffae0d40192c053500c156cb7b7708a431e2843516d8bfc2ac455\": rpc error: code = NotFound desc = could not find container \"ebeda2f81d4ffae0d40192c053500c156cb7b7708a431e2843516d8bfc2ac455\": container with ID starting with ebeda2f81d4ffae0d40192c053500c156cb7b7708a431e2843516d8bfc2ac455 not found: ID does not exist" Feb 01 15:17:47 crc kubenswrapper[4820]: I0201 15:17:47.213309 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7322a5a9-d67c-45be-8a3d-43a9f4232b6d" path="/var/lib/kubelet/pods/7322a5a9-d67c-45be-8a3d-43a9f4232b6d/volumes" Feb 01 15:17:49 crc kubenswrapper[4820]: I0201 15:17:49.241957 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 15:17:49 crc kubenswrapper[4820]: I0201 15:17:49.243045 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 15:18:12 crc kubenswrapper[4820]: I0201 15:18:12.275413 4820 scope.go:117] "RemoveContainer" 
containerID="4d29f0891610f6036af7cfcb6f99b67fb2175096a87160e5f794ac03e8827d19" Feb 01 15:18:19 crc kubenswrapper[4820]: I0201 15:18:19.242374 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 15:18:19 crc kubenswrapper[4820]: I0201 15:18:19.242931 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 15:18:19 crc kubenswrapper[4820]: I0201 15:18:19.242981 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 15:18:19 crc kubenswrapper[4820]: I0201 15:18:19.243809 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"623ff580406e12d397a144e12f18b3b08667bc7fd7a128bf11255d0933a72222"} pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 15:18:19 crc kubenswrapper[4820]: I0201 15:18:19.243891 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" containerID="cri-o://623ff580406e12d397a144e12f18b3b08667bc7fd7a128bf11255d0933a72222" gracePeriod=600 Feb 01 15:18:20 crc kubenswrapper[4820]: I0201 15:18:20.340687 4820 generic.go:334] "Generic (PLEG): container finished" podID="060a9e0b-803f-4ccc-bed6-92614d449527" containerID="623ff580406e12d397a144e12f18b3b08667bc7fd7a128bf11255d0933a72222" exitCode=0 Feb 01 15:18:20 crc kubenswrapper[4820]: I0201 15:18:20.340733 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerDied","Data":"623ff580406e12d397a144e12f18b3b08667bc7fd7a128bf11255d0933a72222"} Feb 01 15:18:20 crc kubenswrapper[4820]: I0201 15:18:20.342389 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerStarted","Data":"7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91"} Feb 01 15:18:20 crc kubenswrapper[4820]: I0201 15:18:20.342468 4820 scope.go:117] "RemoveContainer" containerID="a1670bd351c873d4c09d5d27e4c51720b4f3470196be4c4324d66031590d0589" Feb 01 15:19:30 crc kubenswrapper[4820]: I0201 15:19:30.990855 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-glsk6"] Feb 01 15:19:30 crc kubenswrapper[4820]: E0201 15:19:30.993572 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7322a5a9-d67c-45be-8a3d-43a9f4232b6d" containerName="extract-content" Feb 01 15:19:30 crc kubenswrapper[4820]: I0201 15:19:30.993593 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7322a5a9-d67c-45be-8a3d-43a9f4232b6d" containerName="extract-content" Feb 01 15:19:30 crc 
kubenswrapper[4820]: E0201 15:19:30.993613 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7322a5a9-d67c-45be-8a3d-43a9f4232b6d" containerName="extract-utilities" Feb 01 15:19:30 crc kubenswrapper[4820]: I0201 15:19:30.993623 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7322a5a9-d67c-45be-8a3d-43a9f4232b6d" containerName="extract-utilities" Feb 01 15:19:30 crc kubenswrapper[4820]: E0201 15:19:30.993684 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7322a5a9-d67c-45be-8a3d-43a9f4232b6d" containerName="registry-server" Feb 01 15:19:30 crc kubenswrapper[4820]: I0201 15:19:30.993696 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7322a5a9-d67c-45be-8a3d-43a9f4232b6d" containerName="registry-server" Feb 01 15:19:30 crc kubenswrapper[4820]: I0201 15:19:30.993928 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7322a5a9-d67c-45be-8a3d-43a9f4232b6d" containerName="registry-server" Feb 01 15:19:30 crc kubenswrapper[4820]: I0201 15:19:30.995510 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glsk6" Feb 01 15:19:31 crc kubenswrapper[4820]: I0201 15:19:31.018227 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-glsk6"] Feb 01 15:19:31 crc kubenswrapper[4820]: I0201 15:19:31.092053 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9crf\" (UniqueName: \"kubernetes.io/projected/8fab2415-0f72-40b7-9504-c6518beb38f0-kube-api-access-x9crf\") pod \"redhat-marketplace-glsk6\" (UID: \"8fab2415-0f72-40b7-9504-c6518beb38f0\") " pod="openshift-marketplace/redhat-marketplace-glsk6" Feb 01 15:19:31 crc kubenswrapper[4820]: I0201 15:19:31.092122 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fab2415-0f72-40b7-9504-c6518beb38f0-catalog-content\") pod \"redhat-marketplace-glsk6\" (UID: \"8fab2415-0f72-40b7-9504-c6518beb38f0\") " pod="openshift-marketplace/redhat-marketplace-glsk6" Feb 01 15:19:31 crc kubenswrapper[4820]: I0201 15:19:31.092222 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fab2415-0f72-40b7-9504-c6518beb38f0-utilities\") pod \"redhat-marketplace-glsk6\" (UID: \"8fab2415-0f72-40b7-9504-c6518beb38f0\") " pod="openshift-marketplace/redhat-marketplace-glsk6" Feb 01 15:19:31 crc kubenswrapper[4820]: I0201 15:19:31.194179 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9crf\" (UniqueName: \"kubernetes.io/projected/8fab2415-0f72-40b7-9504-c6518beb38f0-kube-api-access-x9crf\") pod \"redhat-marketplace-glsk6\" (UID: \"8fab2415-0f72-40b7-9504-c6518beb38f0\") " pod="openshift-marketplace/redhat-marketplace-glsk6" Feb 01 15:19:31 crc kubenswrapper[4820]: I0201 15:19:31.194246 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fab2415-0f72-40b7-9504-c6518beb38f0-catalog-content\") pod \"redhat-marketplace-glsk6\" (UID: \"8fab2415-0f72-40b7-9504-c6518beb38f0\") " pod="openshift-marketplace/redhat-marketplace-glsk6" Feb 01 15:19:31 crc kubenswrapper[4820]: I0201 15:19:31.194280 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/8fab2415-0f72-40b7-9504-c6518beb38f0-utilities\") pod \"redhat-marketplace-glsk6\" (UID: \"8fab2415-0f72-40b7-9504-c6518beb38f0\") " pod="openshift-marketplace/redhat-marketplace-glsk6" Feb 01 15:19:31 crc kubenswrapper[4820]: I0201 15:19:31.194945 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fab2415-0f72-40b7-9504-c6518beb38f0-catalog-content\") pod \"redhat-marketplace-glsk6\" (UID: \"8fab2415-0f72-40b7-9504-c6518beb38f0\") " pod="openshift-marketplace/redhat-marketplace-glsk6" Feb 01 15:19:31 crc kubenswrapper[4820]: I0201 15:19:31.194976 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fab2415-0f72-40b7-9504-c6518beb38f0-utilities\") pod \"redhat-marketplace-glsk6\" (UID: \"8fab2415-0f72-40b7-9504-c6518beb38f0\") " pod="openshift-marketplace/redhat-marketplace-glsk6" Feb 01 15:19:31 crc kubenswrapper[4820]: I0201 15:19:31.217216 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9crf\" (UniqueName: \"kubernetes.io/projected/8fab2415-0f72-40b7-9504-c6518beb38f0-kube-api-access-x9crf\") pod \"redhat-marketplace-glsk6\" (UID: \"8fab2415-0f72-40b7-9504-c6518beb38f0\") " pod="openshift-marketplace/redhat-marketplace-glsk6" Feb 01 15:19:31 crc kubenswrapper[4820]: I0201 15:19:31.336378 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glsk6" Feb 01 15:19:31 crc kubenswrapper[4820]: I0201 15:19:31.888795 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-glsk6"] Feb 01 15:19:32 crc kubenswrapper[4820]: I0201 15:19:32.043108 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glsk6" event={"ID":"8fab2415-0f72-40b7-9504-c6518beb38f0","Type":"ContainerStarted","Data":"541d7dc19762e130fa6db0c7284978a7ded6624f64f62a94b1c4d34359312057"} Feb 01 15:19:32 crc kubenswrapper[4820]: E0201 15:19:32.274344 4820 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fab2415_0f72_40b7_9504_c6518beb38f0.slice/crio-2fe95a8f29b930723bfee5362dd05ae7286396bdb628f6f4adb6096dca836950.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fab2415_0f72_40b7_9504_c6518beb38f0.slice/crio-conmon-2fe95a8f29b930723bfee5362dd05ae7286396bdb628f6f4adb6096dca836950.scope\": RecentStats: unable to find data in memory cache]" Feb 01 15:19:33 crc kubenswrapper[4820]: I0201 15:19:33.052768 4820 generic.go:334] "Generic (PLEG): container finished" podID="8fab2415-0f72-40b7-9504-c6518beb38f0" containerID="2fe95a8f29b930723bfee5362dd05ae7286396bdb628f6f4adb6096dca836950" exitCode=0 Feb 01 15:19:33 crc kubenswrapper[4820]: I0201 15:19:33.052820 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glsk6" event={"ID":"8fab2415-0f72-40b7-9504-c6518beb38f0","Type":"ContainerDied","Data":"2fe95a8f29b930723bfee5362dd05ae7286396bdb628f6f4adb6096dca836950"} Feb 01 15:19:35 crc kubenswrapper[4820]: I0201 15:19:35.077785 4820 generic.go:334] "Generic (PLEG): container finished" podID="8fab2415-0f72-40b7-9504-c6518beb38f0" containerID="a54d54e8c9b307ff306d3b74a52eca016a76c5a91abc9865ceebedcf7daeaa7e" 
exitCode=0 Feb 01 15:19:35 crc kubenswrapper[4820]: I0201 15:19:35.077949 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glsk6" event={"ID":"8fab2415-0f72-40b7-9504-c6518beb38f0","Type":"ContainerDied","Data":"a54d54e8c9b307ff306d3b74a52eca016a76c5a91abc9865ceebedcf7daeaa7e"} Feb 01 15:19:36 crc kubenswrapper[4820]: I0201 15:19:36.087173 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glsk6" event={"ID":"8fab2415-0f72-40b7-9504-c6518beb38f0","Type":"ContainerStarted","Data":"c804bec8148bd1803400b977aa43506ddb448959ee64a911c23a0dc3834b6931"} Feb 01 15:19:36 crc kubenswrapper[4820]: I0201 15:19:36.115435 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-glsk6" podStartSLOduration=3.639371788 podStartE2EDuration="6.115412707s" podCreationTimestamp="2026-02-01 15:19:30 +0000 UTC" firstStartedPulling="2026-02-01 15:19:33.055862674 +0000 UTC m=+3514.576228978" lastFinishedPulling="2026-02-01 15:19:35.531903613 +0000 UTC m=+3517.052269897" observedRunningTime="2026-02-01 15:19:36.104972935 +0000 UTC m=+3517.625339249" watchObservedRunningTime="2026-02-01 15:19:36.115412707 +0000 UTC m=+3517.635778981" Feb 01 15:19:41 crc kubenswrapper[4820]: I0201 15:19:41.337411 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-glsk6" Feb 01 15:19:41 crc kubenswrapper[4820]: I0201 15:19:41.340244 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-glsk6" Feb 01 15:19:41 crc kubenswrapper[4820]: I0201 15:19:41.386175 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-glsk6" Feb 01 15:19:42 crc kubenswrapper[4820]: I0201 15:19:42.212180 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-glsk6" Feb 01 15:19:42 crc kubenswrapper[4820]: I0201 15:19:42.267397 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-glsk6"] Feb 01 15:19:44 crc kubenswrapper[4820]: I0201 15:19:44.165941 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-glsk6" podUID="8fab2415-0f72-40b7-9504-c6518beb38f0" containerName="registry-server" containerID="cri-o://c804bec8148bd1803400b977aa43506ddb448959ee64a911c23a0dc3834b6931" gracePeriod=2 Feb 01 15:19:44 crc kubenswrapper[4820]: I0201 15:19:44.639084 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glsk6" Feb 01 15:19:44 crc kubenswrapper[4820]: I0201 15:19:44.751727 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9crf\" (UniqueName: \"kubernetes.io/projected/8fab2415-0f72-40b7-9504-c6518beb38f0-kube-api-access-x9crf\") pod \"8fab2415-0f72-40b7-9504-c6518beb38f0\" (UID: \"8fab2415-0f72-40b7-9504-c6518beb38f0\") " Feb 01 15:19:44 crc kubenswrapper[4820]: I0201 15:19:44.751836 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fab2415-0f72-40b7-9504-c6518beb38f0-catalog-content\") pod \"8fab2415-0f72-40b7-9504-c6518beb38f0\" (UID: \"8fab2415-0f72-40b7-9504-c6518beb38f0\") " Feb 01 15:19:44 crc kubenswrapper[4820]: I0201 15:19:44.751966 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fab2415-0f72-40b7-9504-c6518beb38f0-utilities\") pod \"8fab2415-0f72-40b7-9504-c6518beb38f0\" (UID: \"8fab2415-0f72-40b7-9504-c6518beb38f0\") " Feb 01 15:19:44 crc kubenswrapper[4820]: I0201 15:19:44.753083 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fab2415-0f72-40b7-9504-c6518beb38f0-utilities" (OuterVolumeSpecName: "utilities") pod "8fab2415-0f72-40b7-9504-c6518beb38f0" (UID: "8fab2415-0f72-40b7-9504-c6518beb38f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:19:44 crc kubenswrapper[4820]: I0201 15:19:44.777316 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fab2415-0f72-40b7-9504-c6518beb38f0-kube-api-access-x9crf" (OuterVolumeSpecName: "kube-api-access-x9crf") pod "8fab2415-0f72-40b7-9504-c6518beb38f0" (UID: "8fab2415-0f72-40b7-9504-c6518beb38f0"). InnerVolumeSpecName "kube-api-access-x9crf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:19:44 crc kubenswrapper[4820]: I0201 15:19:44.791818 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fab2415-0f72-40b7-9504-c6518beb38f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8fab2415-0f72-40b7-9504-c6518beb38f0" (UID: "8fab2415-0f72-40b7-9504-c6518beb38f0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:19:44 crc kubenswrapper[4820]: I0201 15:19:44.853965 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fab2415-0f72-40b7-9504-c6518beb38f0-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 15:19:44 crc kubenswrapper[4820]: I0201 15:19:44.854001 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9crf\" (UniqueName: \"kubernetes.io/projected/8fab2415-0f72-40b7-9504-c6518beb38f0-kube-api-access-x9crf\") on node \"crc\" DevicePath \"\"" Feb 01 15:19:44 crc kubenswrapper[4820]: I0201 15:19:44.854012 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fab2415-0f72-40b7-9504-c6518beb38f0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 15:19:45 crc kubenswrapper[4820]: I0201 15:19:45.176910 4820 generic.go:334] "Generic (PLEG): container finished" podID="8fab2415-0f72-40b7-9504-c6518beb38f0" containerID="c804bec8148bd1803400b977aa43506ddb448959ee64a911c23a0dc3834b6931" exitCode=0 Feb 01 15:19:45 crc kubenswrapper[4820]: I0201 15:19:45.176995 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glsk6" Feb 01 15:19:45 crc kubenswrapper[4820]: I0201 15:19:45.177045 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glsk6" event={"ID":"8fab2415-0f72-40b7-9504-c6518beb38f0","Type":"ContainerDied","Data":"c804bec8148bd1803400b977aa43506ddb448959ee64a911c23a0dc3834b6931"} Feb 01 15:19:45 crc kubenswrapper[4820]: I0201 15:19:45.178792 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glsk6" event={"ID":"8fab2415-0f72-40b7-9504-c6518beb38f0","Type":"ContainerDied","Data":"541d7dc19762e130fa6db0c7284978a7ded6624f64f62a94b1c4d34359312057"} Feb 01 15:19:45 crc kubenswrapper[4820]: I0201 15:19:45.178841 4820 scope.go:117] "RemoveContainer" containerID="c804bec8148bd1803400b977aa43506ddb448959ee64a911c23a0dc3834b6931" Feb 01 15:19:45 crc kubenswrapper[4820]: I0201 15:19:45.202638 4820 scope.go:117] "RemoveContainer" containerID="a54d54e8c9b307ff306d3b74a52eca016a76c5a91abc9865ceebedcf7daeaa7e" Feb 01 15:19:45 crc kubenswrapper[4820]: I0201 15:19:45.251650 4820 scope.go:117] "RemoveContainer" containerID="2fe95a8f29b930723bfee5362dd05ae7286396bdb628f6f4adb6096dca836950" Feb 01 15:19:45 crc kubenswrapper[4820]: I0201 15:19:45.261574 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-glsk6"] Feb 01 15:19:45 crc kubenswrapper[4820]: I0201 15:19:45.273463 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-glsk6"] Feb 01 15:19:45 crc kubenswrapper[4820]: I0201 15:19:45.306068 4820 scope.go:117] "RemoveContainer" containerID="c804bec8148bd1803400b977aa43506ddb448959ee64a911c23a0dc3834b6931" Feb 01 15:19:45 crc kubenswrapper[4820]: E0201 15:19:45.307113 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c804bec8148bd1803400b977aa43506ddb448959ee64a911c23a0dc3834b6931\": container with ID starting with c804bec8148bd1803400b977aa43506ddb448959ee64a911c23a0dc3834b6931 not found: ID does not exist" containerID="c804bec8148bd1803400b977aa43506ddb448959ee64a911c23a0dc3834b6931" Feb 01 15:19:45 crc kubenswrapper[4820]: I0201 15:19:45.307163 4820 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c804bec8148bd1803400b977aa43506ddb448959ee64a911c23a0dc3834b6931"} err="failed to get container status \"c804bec8148bd1803400b977aa43506ddb448959ee64a911c23a0dc3834b6931\": rpc error: code = NotFound desc = could not find container \"c804bec8148bd1803400b977aa43506ddb448959ee64a911c23a0dc3834b6931\": container with ID starting with c804bec8148bd1803400b977aa43506ddb448959ee64a911c23a0dc3834b6931 not found: ID does not exist" Feb 01 15:19:45 crc kubenswrapper[4820]: I0201 15:19:45.307194 4820 scope.go:117] "RemoveContainer" containerID="a54d54e8c9b307ff306d3b74a52eca016a76c5a91abc9865ceebedcf7daeaa7e" Feb 01 15:19:45 crc kubenswrapper[4820]: E0201 15:19:45.307659 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a54d54e8c9b307ff306d3b74a52eca016a76c5a91abc9865ceebedcf7daeaa7e\": container with ID starting with a54d54e8c9b307ff306d3b74a52eca016a76c5a91abc9865ceebedcf7daeaa7e not found: ID does not exist" containerID="a54d54e8c9b307ff306d3b74a52eca016a76c5a91abc9865ceebedcf7daeaa7e" Feb 01 15:19:45 crc kubenswrapper[4820]: I0201 15:19:45.307679 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a54d54e8c9b307ff306d3b74a52eca016a76c5a91abc9865ceebedcf7daeaa7e"} err="failed to get container status \"a54d54e8c9b307ff306d3b74a52eca016a76c5a91abc9865ceebedcf7daeaa7e\": rpc error: code = NotFound desc = could not find container \"a54d54e8c9b307ff306d3b74a52eca016a76c5a91abc9865ceebedcf7daeaa7e\": container with ID starting with a54d54e8c9b307ff306d3b74a52eca016a76c5a91abc9865ceebedcf7daeaa7e not found: ID does not exist" Feb 01 15:19:45 crc kubenswrapper[4820]: I0201 15:19:45.307691 4820 scope.go:117] "RemoveContainer" containerID="2fe95a8f29b930723bfee5362dd05ae7286396bdb628f6f4adb6096dca836950" Feb 01 15:19:45 crc kubenswrapper[4820]: E0201 15:19:45.307906 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fe95a8f29b930723bfee5362dd05ae7286396bdb628f6f4adb6096dca836950\": container with ID starting with 2fe95a8f29b930723bfee5362dd05ae7286396bdb628f6f4adb6096dca836950 not found: ID does not exist" containerID="2fe95a8f29b930723bfee5362dd05ae7286396bdb628f6f4adb6096dca836950" Feb 01 15:19:45 crc kubenswrapper[4820]: I0201 15:19:45.307923 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fe95a8f29b930723bfee5362dd05ae7286396bdb628f6f4adb6096dca836950"} err="failed to get container status \"2fe95a8f29b930723bfee5362dd05ae7286396bdb628f6f4adb6096dca836950\": rpc error: code = NotFound desc = could not find container \"2fe95a8f29b930723bfee5362dd05ae7286396bdb628f6f4adb6096dca836950\": container with ID starting with 2fe95a8f29b930723bfee5362dd05ae7286396bdb628f6f4adb6096dca836950 not found: ID does not exist" Feb 01 15:19:47 crc kubenswrapper[4820]: I0201 15:19:47.211510 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fab2415-0f72-40b7-9504-c6518beb38f0" path="/var/lib/kubelet/pods/8fab2415-0f72-40b7-9504-c6518beb38f0/volumes" Feb 01 15:20:19 crc kubenswrapper[4820]: I0201 15:20:19.242124 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 15:20:19 crc kubenswrapper[4820]: I0201 15:20:19.242942 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 15:20:49 crc kubenswrapper[4820]: I0201 15:20:49.242553 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 15:20:49 crc kubenswrapper[4820]: I0201 15:20:49.243231 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 15:21:19 crc kubenswrapper[4820]: I0201 15:21:19.242488 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 15:21:19 crc kubenswrapper[4820]: I0201 15:21:19.243136 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 15:21:19 crc kubenswrapper[4820]: I0201 15:21:19.243193 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 15:21:19 crc kubenswrapper[4820]: I0201 15:21:19.244087 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91"} pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 15:21:19 crc kubenswrapper[4820]: I0201 15:21:19.244153 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" containerID="cri-o://7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91" gracePeriod=600 Feb 01 15:21:19 crc kubenswrapper[4820]: E0201 15:21:19.370108 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:21:20 crc kubenswrapper[4820]: I0201 15:21:20.047409 4820 
generic.go:334] "Generic (PLEG): container finished" podID="060a9e0b-803f-4ccc-bed6-92614d449527" containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91" exitCode=0 Feb 01 15:21:20 crc kubenswrapper[4820]: I0201 15:21:20.047508 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerDied","Data":"7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91"} Feb 01 15:21:20 crc kubenswrapper[4820]: I0201 15:21:20.047685 4820 scope.go:117] "RemoveContainer" containerID="623ff580406e12d397a144e12f18b3b08667bc7fd7a128bf11255d0933a72222" Feb 01 15:21:20 crc kubenswrapper[4820]: I0201 15:21:20.048134 4820 scope.go:117] "RemoveContainer" containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91" Feb 01 15:21:20 crc kubenswrapper[4820]: E0201 15:21:20.048619 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:21:30 crc kubenswrapper[4820]: I0201 15:21:30.199685 4820 scope.go:117] "RemoveContainer" containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91" Feb 01 15:21:30 crc kubenswrapper[4820]: E0201 15:21:30.201357 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:21:44 crc kubenswrapper[4820]: I0201 15:21:44.199661 4820 scope.go:117] "RemoveContainer" containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91" Feb 01 15:21:44 crc kubenswrapper[4820]: E0201 15:21:44.200931 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:21:59 crc kubenswrapper[4820]: I0201 15:21:59.212755 4820 scope.go:117] "RemoveContainer" containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91" Feb 01 15:21:59 crc kubenswrapper[4820]: E0201 15:21:59.213909 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:22:11 crc kubenswrapper[4820]: I0201 15:22:11.199675 4820 scope.go:117] "RemoveContainer" 
containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91" Feb 01 15:22:11 crc kubenswrapper[4820]: E0201 15:22:11.201224 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:22:25 crc kubenswrapper[4820]: I0201 15:22:25.200037 4820 scope.go:117] "RemoveContainer" containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91" Feb 01 15:22:25 crc kubenswrapper[4820]: E0201 15:22:25.200962 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:22:27 crc kubenswrapper[4820]: I0201 15:22:27.715227 4820 generic.go:334] "Generic (PLEG): container finished" podID="9b040975-c603-4b7a-875c-c372ddb0e24e" containerID="71364fb429f365da3376852eac69aacfccf731c8004e5a699fa622bd359ffaf6" exitCode=1 Feb 01 15:22:27 crc kubenswrapper[4820]: I0201 15:22:27.715377 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9b040975-c603-4b7a-875c-c372ddb0e24e","Type":"ContainerDied","Data":"71364fb429f365da3376852eac69aacfccf731c8004e5a699fa622bd359ffaf6"} Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.172823 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.355325 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9b040975-c603-4b7a-875c-c372ddb0e24e-ssh-key\") pod \"9b040975-c603-4b7a-875c-c372ddb0e24e\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.355752 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9b040975-c603-4b7a-875c-c372ddb0e24e-test-operator-ephemeral-temporary\") pod \"9b040975-c603-4b7a-875c-c372ddb0e24e\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.355831 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvhlt\" (UniqueName: \"kubernetes.io/projected/9b040975-c603-4b7a-875c-c372ddb0e24e-kube-api-access-wvhlt\") pod \"9b040975-c603-4b7a-875c-c372ddb0e24e\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.355905 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9b040975-c603-4b7a-875c-c372ddb0e24e-openstack-config\") pod \"9b040975-c603-4b7a-875c-c372ddb0e24e\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.356040 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b040975-c603-4b7a-875c-c372ddb0e24e-config-data\") pod \"9b040975-c603-4b7a-875c-c372ddb0e24e\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.356115 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"9b040975-c603-4b7a-875c-c372ddb0e24e\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.356237 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9b040975-c603-4b7a-875c-c372ddb0e24e-test-operator-ephemeral-workdir\") pod \"9b040975-c603-4b7a-875c-c372ddb0e24e\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.356343 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9b040975-c603-4b7a-875c-c372ddb0e24e-openstack-config-secret\") pod \"9b040975-c603-4b7a-875c-c372ddb0e24e\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.356396 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9b040975-c603-4b7a-875c-c372ddb0e24e-ca-certs\") pod \"9b040975-c603-4b7a-875c-c372ddb0e24e\" (UID: \"9b040975-c603-4b7a-875c-c372ddb0e24e\") " Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.356723 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b040975-c603-4b7a-875c-c372ddb0e24e-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "9b040975-c603-4b7a-875c-c372ddb0e24e" (UID: "9b040975-c603-4b7a-875c-c372ddb0e24e"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.357006 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b040975-c603-4b7a-875c-c372ddb0e24e-config-data" (OuterVolumeSpecName: "config-data") pod "9b040975-c603-4b7a-875c-c372ddb0e24e" (UID: "9b040975-c603-4b7a-875c-c372ddb0e24e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.357538 4820 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9b040975-c603-4b7a-875c-c372ddb0e24e-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.357561 4820 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b040975-c603-4b7a-875c-c372ddb0e24e-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.360703 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b040975-c603-4b7a-875c-c372ddb0e24e-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "9b040975-c603-4b7a-875c-c372ddb0e24e" (UID: "9b040975-c603-4b7a-875c-c372ddb0e24e"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.361219 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b040975-c603-4b7a-875c-c372ddb0e24e-kube-api-access-wvhlt" (OuterVolumeSpecName: "kube-api-access-wvhlt") pod "9b040975-c603-4b7a-875c-c372ddb0e24e" (UID: "9b040975-c603-4b7a-875c-c372ddb0e24e"). InnerVolumeSpecName "kube-api-access-wvhlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.364868 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "9b040975-c603-4b7a-875c-c372ddb0e24e" (UID: "9b040975-c603-4b7a-875c-c372ddb0e24e"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.382775 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b040975-c603-4b7a-875c-c372ddb0e24e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "9b040975-c603-4b7a-875c-c372ddb0e24e" (UID: "9b040975-c603-4b7a-875c-c372ddb0e24e"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.398843 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b040975-c603-4b7a-875c-c372ddb0e24e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9b040975-c603-4b7a-875c-c372ddb0e24e" (UID: "9b040975-c603-4b7a-875c-c372ddb0e24e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.403120 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b040975-c603-4b7a-875c-c372ddb0e24e-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "9b040975-c603-4b7a-875c-c372ddb0e24e" (UID: "9b040975-c603-4b7a-875c-c372ddb0e24e"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.404539 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b040975-c603-4b7a-875c-c372ddb0e24e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "9b040975-c603-4b7a-875c-c372ddb0e24e" (UID: "9b040975-c603-4b7a-875c-c372ddb0e24e"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.459772 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9b040975-c603-4b7a-875c-c372ddb0e24e-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.459818 4820 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9b040975-c603-4b7a-875c-c372ddb0e24e-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.459830 4820 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9b040975-c603-4b7a-875c-c372ddb0e24e-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.459839 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvhlt\" (UniqueName: \"kubernetes.io/projected/9b040975-c603-4b7a-875c-c372ddb0e24e-kube-api-access-wvhlt\") on node \"crc\" DevicePath \"\"" Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.459849 4820 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9b040975-c603-4b7a-875c-c372ddb0e24e-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.459885 4820 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.459896 4820 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9b040975-c603-4b7a-875c-c372ddb0e24e-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.479665 4820 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.561399 4820 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.735416 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"9b040975-c603-4b7a-875c-c372ddb0e24e","Type":"ContainerDied","Data":"f4f922a60716e2ef1f0eda797c14245a4d1d5cf8cf3f49bb9413729ab3dd7ce1"} Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.735461 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4f922a60716e2ef1f0eda797c14245a4d1d5cf8cf3f49bb9413729ab3dd7ce1" Feb 01 15:22:29 crc kubenswrapper[4820]: I0201 15:22:29.735534 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 01 15:22:32 crc kubenswrapper[4820]: I0201 15:22:32.123940 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 01 15:22:32 crc kubenswrapper[4820]: E0201 15:22:32.124771 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fab2415-0f72-40b7-9504-c6518beb38f0" containerName="extract-content" Feb 01 15:22:32 crc kubenswrapper[4820]: I0201 15:22:32.124785 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fab2415-0f72-40b7-9504-c6518beb38f0" containerName="extract-content" Feb 01 15:22:32 crc kubenswrapper[4820]: E0201 15:22:32.124812 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fab2415-0f72-40b7-9504-c6518beb38f0" containerName="extract-utilities" Feb 01 15:22:32 crc kubenswrapper[4820]: I0201 15:22:32.124822 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fab2415-0f72-40b7-9504-c6518beb38f0" containerName="extract-utilities" Feb 01 15:22:32 crc kubenswrapper[4820]: E0201 15:22:32.124839 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b040975-c603-4b7a-875c-c372ddb0e24e" containerName="tempest-tests-tempest-tests-runner" Feb 01 15:22:32 crc kubenswrapper[4820]: I0201 15:22:32.124845 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b040975-c603-4b7a-875c-c372ddb0e24e" containerName="tempest-tests-tempest-tests-runner" Feb 01 15:22:32 crc kubenswrapper[4820]: E0201 15:22:32.124861 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fab2415-0f72-40b7-9504-c6518beb38f0" containerName="registry-server" Feb 01 15:22:32 crc kubenswrapper[4820]: I0201 15:22:32.124867 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fab2415-0f72-40b7-9504-c6518beb38f0" containerName="registry-server" Feb 01 15:22:32 crc kubenswrapper[4820]: I0201 15:22:32.125210 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b040975-c603-4b7a-875c-c372ddb0e24e" containerName="tempest-tests-tempest-tests-runner" Feb 01 15:22:32 crc kubenswrapper[4820]: I0201 15:22:32.125237 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fab2415-0f72-40b7-9504-c6518beb38f0" containerName="registry-server" Feb 01 15:22:32 crc kubenswrapper[4820]: I0201 15:22:32.126002 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 01 15:22:32 crc kubenswrapper[4820]: I0201 15:22:32.135866 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 01 15:22:32 crc kubenswrapper[4820]: I0201 15:22:32.137192 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zqzp5" Feb 01 15:22:32 crc kubenswrapper[4820]: I0201 15:22:32.212943 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"64854884-2922-44c0-8ae0-dc3e8d3e2b17\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 01 15:22:32 crc kubenswrapper[4820]: I0201 15:22:32.213137 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vbpc\" (UniqueName: \"kubernetes.io/projected/64854884-2922-44c0-8ae0-dc3e8d3e2b17-kube-api-access-4vbpc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"64854884-2922-44c0-8ae0-dc3e8d3e2b17\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 01 15:22:32 crc kubenswrapper[4820]: I0201 15:22:32.314672 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vbpc\" (UniqueName: \"kubernetes.io/projected/64854884-2922-44c0-8ae0-dc3e8d3e2b17-kube-api-access-4vbpc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"64854884-2922-44c0-8ae0-dc3e8d3e2b17\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 01 15:22:32 crc kubenswrapper[4820]: I0201 15:22:32.314791 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"64854884-2922-44c0-8ae0-dc3e8d3e2b17\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 01 15:22:32 crc kubenswrapper[4820]: I0201 15:22:32.315685 4820 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"64854884-2922-44c0-8ae0-dc3e8d3e2b17\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 01 15:22:32 crc kubenswrapper[4820]: I0201 15:22:32.333048 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vbpc\" (UniqueName: \"kubernetes.io/projected/64854884-2922-44c0-8ae0-dc3e8d3e2b17-kube-api-access-4vbpc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"64854884-2922-44c0-8ae0-dc3e8d3e2b17\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 01 15:22:32 crc kubenswrapper[4820]: I0201 15:22:32.363801 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"64854884-2922-44c0-8ae0-dc3e8d3e2b17\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 01 15:22:32 crc 
kubenswrapper[4820]: I0201 15:22:32.458229 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 01 15:22:32 crc kubenswrapper[4820]: I0201 15:22:32.979126 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 01 15:22:32 crc kubenswrapper[4820]: I0201 15:22:32.981606 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 01 15:22:33 crc kubenswrapper[4820]: I0201 15:22:33.789098 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"64854884-2922-44c0-8ae0-dc3e8d3e2b17","Type":"ContainerStarted","Data":"96060d108f926b11be18db53477d1d8cc70dc0cfaf84536ac17196150e095bbd"} Feb 01 15:22:34 crc kubenswrapper[4820]: I0201 15:22:34.800175 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"64854884-2922-44c0-8ae0-dc3e8d3e2b17","Type":"ContainerStarted","Data":"7069c4897e4867e825df5f67c646b5c4b0b376a9146a81df0103cc3cc638c814"} Feb 01 15:22:34 crc kubenswrapper[4820]: I0201 15:22:34.814484 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.842785182 podStartE2EDuration="2.814459611s" podCreationTimestamp="2026-02-01 15:22:32 +0000 UTC" firstStartedPulling="2026-02-01 15:22:32.981406343 +0000 UTC m=+3694.501772627" lastFinishedPulling="2026-02-01 15:22:33.953080732 +0000 UTC m=+3695.473447056" observedRunningTime="2026-02-01 15:22:34.813536218 +0000 UTC m=+3696.333902542" watchObservedRunningTime="2026-02-01 15:22:34.814459611 +0000 UTC m=+3696.334825895" Feb 01 15:22:40 crc kubenswrapper[4820]: I0201 15:22:40.199188 4820 scope.go:117] "RemoveContainer" containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91" Feb 01 15:22:40 crc kubenswrapper[4820]: E0201 15:22:40.200005 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:22:51 crc kubenswrapper[4820]: I0201 15:22:51.199438 4820 scope.go:117] "RemoveContainer" containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91" Feb 01 15:22:51 crc kubenswrapper[4820]: E0201 15:22:51.200731 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:23:04 crc kubenswrapper[4820]: I0201 15:23:04.198760 4820 scope.go:117] "RemoveContainer" containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91" Feb 01 15:23:04 crc kubenswrapper[4820]: E0201 15:23:04.199664 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:23:15 crc kubenswrapper[4820]: I0201 15:23:15.199441 4820 scope.go:117] "RemoveContainer" containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91" Feb 01 15:23:15 crc kubenswrapper[4820]: E0201 15:23:15.200550 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:23:20 crc kubenswrapper[4820]: I0201 15:23:20.062672 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cb2z4"] Feb 01 15:23:20 crc kubenswrapper[4820]: I0201 15:23:20.065626 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cb2z4" Feb 01 15:23:20 crc kubenswrapper[4820]: I0201 15:23:20.079541 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cb2z4"] Feb 01 15:23:20 crc kubenswrapper[4820]: I0201 15:23:20.114104 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2bb32b5-79a4-4726-a341-8a27c0a0f6fe-catalog-content\") pod \"community-operators-cb2z4\" (UID: \"d2bb32b5-79a4-4726-a341-8a27c0a0f6fe\") " pod="openshift-marketplace/community-operators-cb2z4" Feb 01 15:23:20 crc kubenswrapper[4820]: I0201 15:23:20.114554 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmc2c\" (UniqueName: \"kubernetes.io/projected/d2bb32b5-79a4-4726-a341-8a27c0a0f6fe-kube-api-access-fmc2c\") pod \"community-operators-cb2z4\" (UID: \"d2bb32b5-79a4-4726-a341-8a27c0a0f6fe\") " pod="openshift-marketplace/community-operators-cb2z4" Feb 01 15:23:20 crc kubenswrapper[4820]: I0201 15:23:20.114605 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2bb32b5-79a4-4726-a341-8a27c0a0f6fe-utilities\") pod \"community-operators-cb2z4\" (UID: \"d2bb32b5-79a4-4726-a341-8a27c0a0f6fe\") " pod="openshift-marketplace/community-operators-cb2z4" Feb 01 15:23:20 crc kubenswrapper[4820]: I0201 15:23:20.216768 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2bb32b5-79a4-4726-a341-8a27c0a0f6fe-catalog-content\") pod \"community-operators-cb2z4\" (UID: \"d2bb32b5-79a4-4726-a341-8a27c0a0f6fe\") " pod="openshift-marketplace/community-operators-cb2z4" Feb 01 15:23:20 crc kubenswrapper[4820]: I0201 15:23:20.217029 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmc2c\" (UniqueName: \"kubernetes.io/projected/d2bb32b5-79a4-4726-a341-8a27c0a0f6fe-kube-api-access-fmc2c\") pod \"community-operators-cb2z4\" (UID: \"d2bb32b5-79a4-4726-a341-8a27c0a0f6fe\") " 
pod="openshift-marketplace/community-operators-cb2z4" Feb 01 15:23:20 crc kubenswrapper[4820]: I0201 15:23:20.217055 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2bb32b5-79a4-4726-a341-8a27c0a0f6fe-utilities\") pod \"community-operators-cb2z4\" (UID: \"d2bb32b5-79a4-4726-a341-8a27c0a0f6fe\") " pod="openshift-marketplace/community-operators-cb2z4" Feb 01 15:23:20 crc kubenswrapper[4820]: I0201 15:23:20.217688 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2bb32b5-79a4-4726-a341-8a27c0a0f6fe-utilities\") pod \"community-operators-cb2z4\" (UID: \"d2bb32b5-79a4-4726-a341-8a27c0a0f6fe\") " pod="openshift-marketplace/community-operators-cb2z4" Feb 01 15:23:20 crc kubenswrapper[4820]: I0201 15:23:20.217984 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2bb32b5-79a4-4726-a341-8a27c0a0f6fe-catalog-content\") pod \"community-operators-cb2z4\" (UID: \"d2bb32b5-79a4-4726-a341-8a27c0a0f6fe\") " pod="openshift-marketplace/community-operators-cb2z4" Feb 01 15:23:20 crc kubenswrapper[4820]: I0201 15:23:20.242004 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmc2c\" (UniqueName: \"kubernetes.io/projected/d2bb32b5-79a4-4726-a341-8a27c0a0f6fe-kube-api-access-fmc2c\") pod \"community-operators-cb2z4\" (UID: \"d2bb32b5-79a4-4726-a341-8a27c0a0f6fe\") " pod="openshift-marketplace/community-operators-cb2z4" Feb 01 15:23:20 crc kubenswrapper[4820]: I0201 15:23:20.422220 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cb2z4" Feb 01 15:23:20 crc kubenswrapper[4820]: I0201 15:23:20.918734 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cb2z4"] Feb 01 15:23:21 crc kubenswrapper[4820]: I0201 15:23:21.279535 4820 generic.go:334] "Generic (PLEG): container finished" podID="d2bb32b5-79a4-4726-a341-8a27c0a0f6fe" containerID="716f195dd678a6773ebc3fd2dcf16a9e3c94e79e1af027c83b382371b4feeeba" exitCode=0 Feb 01 15:23:21 crc kubenswrapper[4820]: I0201 15:23:21.279611 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cb2z4" event={"ID":"d2bb32b5-79a4-4726-a341-8a27c0a0f6fe","Type":"ContainerDied","Data":"716f195dd678a6773ebc3fd2dcf16a9e3c94e79e1af027c83b382371b4feeeba"} Feb 01 15:23:21 crc kubenswrapper[4820]: I0201 15:23:21.280398 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cb2z4" event={"ID":"d2bb32b5-79a4-4726-a341-8a27c0a0f6fe","Type":"ContainerStarted","Data":"a5c97070df818692f55c68695e5cb8e21ea9e2887376a1bdc122c9c41fe5e1ac"} Feb 01 15:23:23 crc kubenswrapper[4820]: I0201 15:23:23.302177 4820 generic.go:334] "Generic (PLEG): container finished" podID="d2bb32b5-79a4-4726-a341-8a27c0a0f6fe" containerID="9d02fc014d5d15676b40920876f7721787e1e2c31a8eca4fb68a0c79d46d095c" exitCode=0 Feb 01 15:23:23 crc kubenswrapper[4820]: I0201 15:23:23.302295 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cb2z4" event={"ID":"d2bb32b5-79a4-4726-a341-8a27c0a0f6fe","Type":"ContainerDied","Data":"9d02fc014d5d15676b40920876f7721787e1e2c31a8eca4fb68a0c79d46d095c"} Feb 01 15:23:23 crc kubenswrapper[4820]: I0201 15:23:23.393727 4820 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-must-gather-ng5b5/must-gather-hlvcm"] Feb 01 15:23:23 crc kubenswrapper[4820]: I0201 15:23:23.395483 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ng5b5/must-gather-hlvcm" Feb 01 15:23:23 crc kubenswrapper[4820]: I0201 15:23:23.408275 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ng5b5"/"openshift-service-ca.crt" Feb 01 15:23:23 crc kubenswrapper[4820]: I0201 15:23:23.408712 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ng5b5"/"default-dockercfg-4jhgc" Feb 01 15:23:23 crc kubenswrapper[4820]: I0201 15:23:23.408848 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ng5b5"/"kube-root-ca.crt" Feb 01 15:23:23 crc kubenswrapper[4820]: I0201 15:23:23.412237 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ng5b5/must-gather-hlvcm"] Feb 01 15:23:23 crc kubenswrapper[4820]: I0201 15:23:23.497608 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s48tx\" (UniqueName: \"kubernetes.io/projected/e5777edd-acdc-4e8c-952c-fca23e3a1311-kube-api-access-s48tx\") pod \"must-gather-hlvcm\" (UID: \"e5777edd-acdc-4e8c-952c-fca23e3a1311\") " pod="openshift-must-gather-ng5b5/must-gather-hlvcm" Feb 01 15:23:23 crc kubenswrapper[4820]: I0201 15:23:23.497801 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e5777edd-acdc-4e8c-952c-fca23e3a1311-must-gather-output\") pod \"must-gather-hlvcm\" (UID: \"e5777edd-acdc-4e8c-952c-fca23e3a1311\") " pod="openshift-must-gather-ng5b5/must-gather-hlvcm" Feb 01 15:23:23 crc kubenswrapper[4820]: I0201 15:23:23.599990 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s48tx\" (UniqueName: \"kubernetes.io/projected/e5777edd-acdc-4e8c-952c-fca23e3a1311-kube-api-access-s48tx\") pod \"must-gather-hlvcm\" (UID: \"e5777edd-acdc-4e8c-952c-fca23e3a1311\") " pod="openshift-must-gather-ng5b5/must-gather-hlvcm" Feb 01 15:23:23 crc kubenswrapper[4820]: I0201 15:23:23.600121 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e5777edd-acdc-4e8c-952c-fca23e3a1311-must-gather-output\") pod \"must-gather-hlvcm\" (UID: \"e5777edd-acdc-4e8c-952c-fca23e3a1311\") " pod="openshift-must-gather-ng5b5/must-gather-hlvcm" Feb 01 15:23:23 crc kubenswrapper[4820]: I0201 15:23:23.600602 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e5777edd-acdc-4e8c-952c-fca23e3a1311-must-gather-output\") pod \"must-gather-hlvcm\" (UID: \"e5777edd-acdc-4e8c-952c-fca23e3a1311\") " pod="openshift-must-gather-ng5b5/must-gather-hlvcm" Feb 01 15:23:23 crc kubenswrapper[4820]: I0201 15:23:23.620756 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s48tx\" (UniqueName: \"kubernetes.io/projected/e5777edd-acdc-4e8c-952c-fca23e3a1311-kube-api-access-s48tx\") pod \"must-gather-hlvcm\" (UID: \"e5777edd-acdc-4e8c-952c-fca23e3a1311\") " pod="openshift-must-gather-ng5b5/must-gather-hlvcm" Feb 01 15:23:23 crc kubenswrapper[4820]: I0201 15:23:23.718601 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ng5b5/must-gather-hlvcm" Feb 01 15:23:24 crc kubenswrapper[4820]: W0201 15:23:24.244376 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5777edd_acdc_4e8c_952c_fca23e3a1311.slice/crio-3c2a1ac8a4847ed1a1b7f24f49bdbf88e236cd345f467c2da3c8c65363f0eb83 WatchSource:0}: Error finding container 3c2a1ac8a4847ed1a1b7f24f49bdbf88e236cd345f467c2da3c8c65363f0eb83: Status 404 returned error can't find the container with id 3c2a1ac8a4847ed1a1b7f24f49bdbf88e236cd345f467c2da3c8c65363f0eb83 Feb 01 15:23:24 crc kubenswrapper[4820]: I0201 15:23:24.264763 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ng5b5/must-gather-hlvcm"] Feb 01 15:23:24 crc kubenswrapper[4820]: I0201 15:23:24.314112 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cb2z4" event={"ID":"d2bb32b5-79a4-4726-a341-8a27c0a0f6fe","Type":"ContainerStarted","Data":"4e68b4cd951bfef746ca913772f4ec51b93dc3d6e7963af9c46f33e373b231f9"} Feb 01 15:23:24 crc kubenswrapper[4820]: I0201 15:23:24.317301 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ng5b5/must-gather-hlvcm" event={"ID":"e5777edd-acdc-4e8c-952c-fca23e3a1311","Type":"ContainerStarted","Data":"3c2a1ac8a4847ed1a1b7f24f49bdbf88e236cd345f467c2da3c8c65363f0eb83"} Feb 01 15:23:24 crc kubenswrapper[4820]: I0201 15:23:24.340327 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cb2z4" podStartSLOduration=1.908041956 podStartE2EDuration="4.340308117s" podCreationTimestamp="2026-02-01 15:23:20 +0000 UTC" firstStartedPulling="2026-02-01 15:23:21.282037935 +0000 UTC m=+3742.802404219" lastFinishedPulling="2026-02-01 15:23:23.714304096 +0000 UTC m=+3745.234670380" observedRunningTime="2026-02-01 15:23:24.332519219 +0000 UTC m=+3745.852885503" watchObservedRunningTime="2026-02-01 15:23:24.340308117 +0000 UTC m=+3745.860674401" Feb 01 15:23:29 crc kubenswrapper[4820]: I0201 15:23:29.376816 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ng5b5/must-gather-hlvcm" event={"ID":"e5777edd-acdc-4e8c-952c-fca23e3a1311","Type":"ContainerStarted","Data":"fdb24e7439a3ccc84b1417ca57de763d18b8e201fe6f9b4252534d504c6378ef"} Feb 01 15:23:29 crc kubenswrapper[4820]: I0201 15:23:29.377688 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ng5b5/must-gather-hlvcm" event={"ID":"e5777edd-acdc-4e8c-952c-fca23e3a1311","Type":"ContainerStarted","Data":"1fe266e3e3eccdeacc76bb4b2478eddd4e69a5b9986e361958afed0eba17e795"} Feb 01 15:23:29 crc kubenswrapper[4820]: I0201 15:23:29.401602 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ng5b5/must-gather-hlvcm" podStartSLOduration=2.226728192 podStartE2EDuration="6.401556268s" podCreationTimestamp="2026-02-01 15:23:23 +0000 UTC" firstStartedPulling="2026-02-01 15:23:24.247960432 +0000 UTC m=+3745.768326716" lastFinishedPulling="2026-02-01 15:23:28.422788508 +0000 UTC m=+3749.943154792" observedRunningTime="2026-02-01 15:23:29.391535146 +0000 UTC m=+3750.911901430" watchObservedRunningTime="2026-02-01 15:23:29.401556268 +0000 UTC m=+3750.921922552" Feb 01 15:23:30 crc kubenswrapper[4820]: I0201 15:23:30.200100 4820 scope.go:117] "RemoveContainer" containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91" Feb 01 15:23:30 crc 
kubenswrapper[4820]: E0201 15:23:30.200399 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:23:30 crc kubenswrapper[4820]: I0201 15:23:30.422823 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cb2z4" Feb 01 15:23:30 crc kubenswrapper[4820]: I0201 15:23:30.422903 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cb2z4" Feb 01 15:23:30 crc kubenswrapper[4820]: I0201 15:23:30.501903 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cb2z4" Feb 01 15:23:31 crc kubenswrapper[4820]: I0201 15:23:31.442139 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cb2z4" Feb 01 15:23:31 crc kubenswrapper[4820]: I0201 15:23:31.502799 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cb2z4"] Feb 01 15:23:33 crc kubenswrapper[4820]: I0201 15:23:33.148747 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ng5b5/crc-debug-q29k8"] Feb 01 15:23:33 crc kubenswrapper[4820]: I0201 15:23:33.151611 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ng5b5/crc-debug-q29k8" Feb 01 15:23:33 crc kubenswrapper[4820]: I0201 15:23:33.335463 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2bqb\" (UniqueName: \"kubernetes.io/projected/a15554c7-8e38-4bbe-8ff2-e7ff10bedb40-kube-api-access-k2bqb\") pod \"crc-debug-q29k8\" (UID: \"a15554c7-8e38-4bbe-8ff2-e7ff10bedb40\") " pod="openshift-must-gather-ng5b5/crc-debug-q29k8" Feb 01 15:23:33 crc kubenswrapper[4820]: I0201 15:23:33.335911 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a15554c7-8e38-4bbe-8ff2-e7ff10bedb40-host\") pod \"crc-debug-q29k8\" (UID: \"a15554c7-8e38-4bbe-8ff2-e7ff10bedb40\") " pod="openshift-must-gather-ng5b5/crc-debug-q29k8" Feb 01 15:23:33 crc kubenswrapper[4820]: I0201 15:23:33.415766 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cb2z4" podUID="d2bb32b5-79a4-4726-a341-8a27c0a0f6fe" containerName="registry-server" containerID="cri-o://4e68b4cd951bfef746ca913772f4ec51b93dc3d6e7963af9c46f33e373b231f9" gracePeriod=2 Feb 01 15:23:33 crc kubenswrapper[4820]: I0201 15:23:33.437430 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2bqb\" (UniqueName: \"kubernetes.io/projected/a15554c7-8e38-4bbe-8ff2-e7ff10bedb40-kube-api-access-k2bqb\") pod \"crc-debug-q29k8\" (UID: \"a15554c7-8e38-4bbe-8ff2-e7ff10bedb40\") " pod="openshift-must-gather-ng5b5/crc-debug-q29k8" Feb 01 15:23:33 crc kubenswrapper[4820]: I0201 15:23:33.437514 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/a15554c7-8e38-4bbe-8ff2-e7ff10bedb40-host\") pod \"crc-debug-q29k8\" (UID: \"a15554c7-8e38-4bbe-8ff2-e7ff10bedb40\") " pod="openshift-must-gather-ng5b5/crc-debug-q29k8" Feb 01 15:23:33 crc kubenswrapper[4820]: I0201 15:23:33.437692 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a15554c7-8e38-4bbe-8ff2-e7ff10bedb40-host\") pod \"crc-debug-q29k8\" (UID: \"a15554c7-8e38-4bbe-8ff2-e7ff10bedb40\") " pod="openshift-must-gather-ng5b5/crc-debug-q29k8" Feb 01 15:23:33 crc kubenswrapper[4820]: I0201 15:23:33.458479 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2bqb\" (UniqueName: \"kubernetes.io/projected/a15554c7-8e38-4bbe-8ff2-e7ff10bedb40-kube-api-access-k2bqb\") pod \"crc-debug-q29k8\" (UID: \"a15554c7-8e38-4bbe-8ff2-e7ff10bedb40\") " pod="openshift-must-gather-ng5b5/crc-debug-q29k8" Feb 01 15:23:33 crc kubenswrapper[4820]: I0201 15:23:33.477581 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ng5b5/crc-debug-q29k8" Feb 01 15:23:33 crc kubenswrapper[4820]: W0201 15:23:33.512808 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda15554c7_8e38_4bbe_8ff2_e7ff10bedb40.slice/crio-cecf0845b0559b0901c6541b90745cd9e4065a7317480dd0aa4b79acd68604e6 WatchSource:0}: Error finding container cecf0845b0559b0901c6541b90745cd9e4065a7317480dd0aa4b79acd68604e6: Status 404 returned error can't find the container with id cecf0845b0559b0901c6541b90745cd9e4065a7317480dd0aa4b79acd68604e6 Feb 01 15:23:33 crc kubenswrapper[4820]: I0201 15:23:33.908284 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cb2z4" Feb 01 15:23:34 crc kubenswrapper[4820]: I0201 15:23:34.051464 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2bb32b5-79a4-4726-a341-8a27c0a0f6fe-catalog-content\") pod \"d2bb32b5-79a4-4726-a341-8a27c0a0f6fe\" (UID: \"d2bb32b5-79a4-4726-a341-8a27c0a0f6fe\") " Feb 01 15:23:34 crc kubenswrapper[4820]: I0201 15:23:34.051719 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmc2c\" (UniqueName: \"kubernetes.io/projected/d2bb32b5-79a4-4726-a341-8a27c0a0f6fe-kube-api-access-fmc2c\") pod \"d2bb32b5-79a4-4726-a341-8a27c0a0f6fe\" (UID: \"d2bb32b5-79a4-4726-a341-8a27c0a0f6fe\") " Feb 01 15:23:34 crc kubenswrapper[4820]: I0201 15:23:34.051787 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2bb32b5-79a4-4726-a341-8a27c0a0f6fe-utilities\") pod \"d2bb32b5-79a4-4726-a341-8a27c0a0f6fe\" (UID: \"d2bb32b5-79a4-4726-a341-8a27c0a0f6fe\") " Feb 01 15:23:34 crc kubenswrapper[4820]: I0201 15:23:34.052806 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2bb32b5-79a4-4726-a341-8a27c0a0f6fe-utilities" (OuterVolumeSpecName: "utilities") pod "d2bb32b5-79a4-4726-a341-8a27c0a0f6fe" (UID: "d2bb32b5-79a4-4726-a341-8a27c0a0f6fe"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:23:34 crc kubenswrapper[4820]: I0201 15:23:34.056316 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2bb32b5-79a4-4726-a341-8a27c0a0f6fe-kube-api-access-fmc2c" (OuterVolumeSpecName: "kube-api-access-fmc2c") pod "d2bb32b5-79a4-4726-a341-8a27c0a0f6fe" (UID: "d2bb32b5-79a4-4726-a341-8a27c0a0f6fe"). InnerVolumeSpecName "kube-api-access-fmc2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:23:34 crc kubenswrapper[4820]: I0201 15:23:34.099280 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2bb32b5-79a4-4726-a341-8a27c0a0f6fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2bb32b5-79a4-4726-a341-8a27c0a0f6fe" (UID: "d2bb32b5-79a4-4726-a341-8a27c0a0f6fe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:23:34 crc kubenswrapper[4820]: I0201 15:23:34.154333 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmc2c\" (UniqueName: \"kubernetes.io/projected/d2bb32b5-79a4-4726-a341-8a27c0a0f6fe-kube-api-access-fmc2c\") on node \"crc\" DevicePath \"\"" Feb 01 15:23:34 crc kubenswrapper[4820]: I0201 15:23:34.154375 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2bb32b5-79a4-4726-a341-8a27c0a0f6fe-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 15:23:34 crc kubenswrapper[4820]: I0201 15:23:34.154387 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2bb32b5-79a4-4726-a341-8a27c0a0f6fe-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 15:23:34 crc kubenswrapper[4820]: I0201 15:23:34.425682 4820 generic.go:334] "Generic (PLEG): container finished" podID="d2bb32b5-79a4-4726-a341-8a27c0a0f6fe" containerID="4e68b4cd951bfef746ca913772f4ec51b93dc3d6e7963af9c46f33e373b231f9" exitCode=0 Feb 01 15:23:34 crc kubenswrapper[4820]: I0201 15:23:34.425784 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cb2z4" event={"ID":"d2bb32b5-79a4-4726-a341-8a27c0a0f6fe","Type":"ContainerDied","Data":"4e68b4cd951bfef746ca913772f4ec51b93dc3d6e7963af9c46f33e373b231f9"} Feb 01 15:23:34 crc kubenswrapper[4820]: I0201 15:23:34.426116 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cb2z4" event={"ID":"d2bb32b5-79a4-4726-a341-8a27c0a0f6fe","Type":"ContainerDied","Data":"a5c97070df818692f55c68695e5cb8e21ea9e2887376a1bdc122c9c41fe5e1ac"} Feb 01 15:23:34 crc kubenswrapper[4820]: I0201 15:23:34.426152 4820 scope.go:117] "RemoveContainer" containerID="4e68b4cd951bfef746ca913772f4ec51b93dc3d6e7963af9c46f33e373b231f9" Feb 01 15:23:34 crc kubenswrapper[4820]: I0201 15:23:34.425798 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cb2z4" Feb 01 15:23:34 crc kubenswrapper[4820]: I0201 15:23:34.427574 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ng5b5/crc-debug-q29k8" event={"ID":"a15554c7-8e38-4bbe-8ff2-e7ff10bedb40","Type":"ContainerStarted","Data":"cecf0845b0559b0901c6541b90745cd9e4065a7317480dd0aa4b79acd68604e6"} Feb 01 15:23:34 crc kubenswrapper[4820]: I0201 15:23:34.451190 4820 scope.go:117] "RemoveContainer" containerID="9d02fc014d5d15676b40920876f7721787e1e2c31a8eca4fb68a0c79d46d095c" Feb 01 15:23:34 crc kubenswrapper[4820]: I0201 15:23:34.456322 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cb2z4"] Feb 01 15:23:34 crc kubenswrapper[4820]: I0201 15:23:34.481175 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cb2z4"] Feb 01 15:23:34 crc kubenswrapper[4820]: I0201 15:23:34.484712 4820 scope.go:117] "RemoveContainer" containerID="716f195dd678a6773ebc3fd2dcf16a9e3c94e79e1af027c83b382371b4feeeba" Feb 01 15:23:34 crc kubenswrapper[4820]: I0201 15:23:34.516132 4820 scope.go:117] "RemoveContainer" containerID="4e68b4cd951bfef746ca913772f4ec51b93dc3d6e7963af9c46f33e373b231f9" Feb 01 15:23:34 crc kubenswrapper[4820]: E0201 15:23:34.516774 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e68b4cd951bfef746ca913772f4ec51b93dc3d6e7963af9c46f33e373b231f9\": container with ID starting with 4e68b4cd951bfef746ca913772f4ec51b93dc3d6e7963af9c46f33e373b231f9 not found: ID does not exist" containerID="4e68b4cd951bfef746ca913772f4ec51b93dc3d6e7963af9c46f33e373b231f9" Feb 01 15:23:34 crc kubenswrapper[4820]: I0201 15:23:34.516839 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e68b4cd951bfef746ca913772f4ec51b93dc3d6e7963af9c46f33e373b231f9"} err="failed to get container status \"4e68b4cd951bfef746ca913772f4ec51b93dc3d6e7963af9c46f33e373b231f9\": rpc error: code = NotFound desc = could not find container \"4e68b4cd951bfef746ca913772f4ec51b93dc3d6e7963af9c46f33e373b231f9\": container with ID starting with 4e68b4cd951bfef746ca913772f4ec51b93dc3d6e7963af9c46f33e373b231f9 not found: ID does not exist" Feb 01 15:23:34 crc kubenswrapper[4820]: I0201 15:23:34.516867 4820 scope.go:117] "RemoveContainer" containerID="9d02fc014d5d15676b40920876f7721787e1e2c31a8eca4fb68a0c79d46d095c" Feb 01 15:23:34 crc kubenswrapper[4820]: E0201 15:23:34.517335 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d02fc014d5d15676b40920876f7721787e1e2c31a8eca4fb68a0c79d46d095c\": container with ID starting with 9d02fc014d5d15676b40920876f7721787e1e2c31a8eca4fb68a0c79d46d095c not found: ID does not exist" containerID="9d02fc014d5d15676b40920876f7721787e1e2c31a8eca4fb68a0c79d46d095c" Feb 01 15:23:34 crc kubenswrapper[4820]: I0201 15:23:34.517381 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d02fc014d5d15676b40920876f7721787e1e2c31a8eca4fb68a0c79d46d095c"} err="failed to get container status \"9d02fc014d5d15676b40920876f7721787e1e2c31a8eca4fb68a0c79d46d095c\": rpc error: code = NotFound desc = could not find container \"9d02fc014d5d15676b40920876f7721787e1e2c31a8eca4fb68a0c79d46d095c\": container with ID starting with 9d02fc014d5d15676b40920876f7721787e1e2c31a8eca4fb68a0c79d46d095c not 
found: ID does not exist" Feb 01 15:23:34 crc kubenswrapper[4820]: I0201 15:23:34.517409 4820 scope.go:117] "RemoveContainer" containerID="716f195dd678a6773ebc3fd2dcf16a9e3c94e79e1af027c83b382371b4feeeba" Feb 01 15:23:34 crc kubenswrapper[4820]: E0201 15:23:34.517684 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"716f195dd678a6773ebc3fd2dcf16a9e3c94e79e1af027c83b382371b4feeeba\": container with ID starting with 716f195dd678a6773ebc3fd2dcf16a9e3c94e79e1af027c83b382371b4feeeba not found: ID does not exist" containerID="716f195dd678a6773ebc3fd2dcf16a9e3c94e79e1af027c83b382371b4feeeba" Feb 01 15:23:34 crc kubenswrapper[4820]: I0201 15:23:34.517716 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"716f195dd678a6773ebc3fd2dcf16a9e3c94e79e1af027c83b382371b4feeeba"} err="failed to get container status \"716f195dd678a6773ebc3fd2dcf16a9e3c94e79e1af027c83b382371b4feeeba\": rpc error: code = NotFound desc = could not find container \"716f195dd678a6773ebc3fd2dcf16a9e3c94e79e1af027c83b382371b4feeeba\": container with ID starting with 716f195dd678a6773ebc3fd2dcf16a9e3c94e79e1af027c83b382371b4feeeba not found: ID does not exist" Feb 01 15:23:34 crc kubenswrapper[4820]: E0201 15:23:34.852350 4820 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.73:51482->38.102.83.73:46051: write tcp 38.102.83.73:51482->38.102.83.73:46051: write: broken pipe Feb 01 15:23:35 crc kubenswrapper[4820]: I0201 15:23:35.211254 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2bb32b5-79a4-4726-a341-8a27c0a0f6fe" path="/var/lib/kubelet/pods/d2bb32b5-79a4-4726-a341-8a27c0a0f6fe/volumes" Feb 01 15:23:43 crc kubenswrapper[4820]: I0201 15:23:43.199079 4820 scope.go:117] "RemoveContainer" containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91" Feb 01 15:23:43 crc kubenswrapper[4820]: E0201 15:23:43.199846 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:23:44 crc kubenswrapper[4820]: I0201 15:23:44.544238 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ng5b5/crc-debug-q29k8" event={"ID":"a15554c7-8e38-4bbe-8ff2-e7ff10bedb40","Type":"ContainerStarted","Data":"fdcff894861229422e925786a0de4a197f08e66b75421e97365ce2e81486eb84"} Feb 01 15:23:44 crc kubenswrapper[4820]: I0201 15:23:44.566339 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ng5b5/crc-debug-q29k8" podStartSLOduration=1.29011835 podStartE2EDuration="11.566317604s" podCreationTimestamp="2026-02-01 15:23:33 +0000 UTC" firstStartedPulling="2026-02-01 15:23:33.51502714 +0000 UTC m=+3755.035393464" lastFinishedPulling="2026-02-01 15:23:43.791226434 +0000 UTC m=+3765.311592718" observedRunningTime="2026-02-01 15:23:44.554614171 +0000 UTC m=+3766.074980445" watchObservedRunningTime="2026-02-01 15:23:44.566317604 +0000 UTC m=+3766.086683908" Feb 01 15:23:55 crc kubenswrapper[4820]: I0201 15:23:55.202098 4820 scope.go:117] "RemoveContainer" 
containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91" Feb 01 15:23:55 crc kubenswrapper[4820]: E0201 15:23:55.202875 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:24:10 crc kubenswrapper[4820]: I0201 15:24:10.200489 4820 scope.go:117] "RemoveContainer" containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91" Feb 01 15:24:10 crc kubenswrapper[4820]: E0201 15:24:10.201441 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:24:16 crc kubenswrapper[4820]: I0201 15:24:16.761458 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t6pqs"] Feb 01 15:24:16 crc kubenswrapper[4820]: E0201 15:24:16.762835 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2bb32b5-79a4-4726-a341-8a27c0a0f6fe" containerName="extract-utilities" Feb 01 15:24:16 crc kubenswrapper[4820]: I0201 15:24:16.762858 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2bb32b5-79a4-4726-a341-8a27c0a0f6fe" containerName="extract-utilities" Feb 01 15:24:16 crc kubenswrapper[4820]: E0201 15:24:16.762906 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2bb32b5-79a4-4726-a341-8a27c0a0f6fe" containerName="extract-content" Feb 01 15:24:16 crc kubenswrapper[4820]: I0201 15:24:16.762920 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2bb32b5-79a4-4726-a341-8a27c0a0f6fe" containerName="extract-content" Feb 01 15:24:16 crc kubenswrapper[4820]: E0201 15:24:16.762958 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2bb32b5-79a4-4726-a341-8a27c0a0f6fe" containerName="registry-server" Feb 01 15:24:16 crc kubenswrapper[4820]: I0201 15:24:16.762973 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2bb32b5-79a4-4726-a341-8a27c0a0f6fe" containerName="registry-server" Feb 01 15:24:16 crc kubenswrapper[4820]: I0201 15:24:16.763313 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2bb32b5-79a4-4726-a341-8a27c0a0f6fe" containerName="registry-server" Feb 01 15:24:16 crc kubenswrapper[4820]: I0201 15:24:16.765666 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t6pqs" Feb 01 15:24:16 crc kubenswrapper[4820]: I0201 15:24:16.815433 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t6pqs"] Feb 01 15:24:16 crc kubenswrapper[4820]: I0201 15:24:16.914732 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsg8w\" (UniqueName: \"kubernetes.io/projected/fa85845c-cfaf-4168-9b8c-d8b79b67f37c-kube-api-access-gsg8w\") pod \"certified-operators-t6pqs\" (UID: \"fa85845c-cfaf-4168-9b8c-d8b79b67f37c\") " pod="openshift-marketplace/certified-operators-t6pqs" Feb 01 15:24:16 crc kubenswrapper[4820]: I0201 15:24:16.914858 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa85845c-cfaf-4168-9b8c-d8b79b67f37c-utilities\") pod \"certified-operators-t6pqs\" (UID: \"fa85845c-cfaf-4168-9b8c-d8b79b67f37c\") " pod="openshift-marketplace/certified-operators-t6pqs" Feb 01 15:24:16 crc kubenswrapper[4820]: I0201 15:24:16.914911 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa85845c-cfaf-4168-9b8c-d8b79b67f37c-catalog-content\") pod \"certified-operators-t6pqs\" (UID: \"fa85845c-cfaf-4168-9b8c-d8b79b67f37c\") " pod="openshift-marketplace/certified-operators-t6pqs" Feb 01 15:24:17 crc kubenswrapper[4820]: I0201 15:24:17.017055 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsg8w\" (UniqueName: \"kubernetes.io/projected/fa85845c-cfaf-4168-9b8c-d8b79b67f37c-kube-api-access-gsg8w\") pod \"certified-operators-t6pqs\" (UID: \"fa85845c-cfaf-4168-9b8c-d8b79b67f37c\") " pod="openshift-marketplace/certified-operators-t6pqs" Feb 01 15:24:17 crc kubenswrapper[4820]: I0201 15:24:17.017132 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa85845c-cfaf-4168-9b8c-d8b79b67f37c-utilities\") pod \"certified-operators-t6pqs\" (UID: \"fa85845c-cfaf-4168-9b8c-d8b79b67f37c\") " pod="openshift-marketplace/certified-operators-t6pqs" Feb 01 15:24:17 crc kubenswrapper[4820]: I0201 15:24:17.017153 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa85845c-cfaf-4168-9b8c-d8b79b67f37c-catalog-content\") pod \"certified-operators-t6pqs\" (UID: \"fa85845c-cfaf-4168-9b8c-d8b79b67f37c\") " pod="openshift-marketplace/certified-operators-t6pqs" Feb 01 15:24:17 crc kubenswrapper[4820]: I0201 15:24:17.017699 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa85845c-cfaf-4168-9b8c-d8b79b67f37c-catalog-content\") pod \"certified-operators-t6pqs\" (UID: \"fa85845c-cfaf-4168-9b8c-d8b79b67f37c\") " pod="openshift-marketplace/certified-operators-t6pqs" Feb 01 15:24:17 crc kubenswrapper[4820]: I0201 15:24:17.017773 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa85845c-cfaf-4168-9b8c-d8b79b67f37c-utilities\") pod \"certified-operators-t6pqs\" (UID: \"fa85845c-cfaf-4168-9b8c-d8b79b67f37c\") " pod="openshift-marketplace/certified-operators-t6pqs" Feb 01 15:24:17 crc kubenswrapper[4820]: I0201 15:24:17.048410 4820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gsg8w\" (UniqueName: \"kubernetes.io/projected/fa85845c-cfaf-4168-9b8c-d8b79b67f37c-kube-api-access-gsg8w\") pod \"certified-operators-t6pqs\" (UID: \"fa85845c-cfaf-4168-9b8c-d8b79b67f37c\") " pod="openshift-marketplace/certified-operators-t6pqs" Feb 01 15:24:17 crc kubenswrapper[4820]: I0201 15:24:17.143816 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6pqs" Feb 01 15:24:17 crc kubenswrapper[4820]: I0201 15:24:17.652702 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t6pqs"] Feb 01 15:24:17 crc kubenswrapper[4820]: W0201 15:24:17.848139 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa85845c_cfaf_4168_9b8c_d8b79b67f37c.slice/crio-97760b48fea74d9d9abd86492d8c75f55a620cd918633d112a70af82e39c7f25 WatchSource:0}: Error finding container 97760b48fea74d9d9abd86492d8c75f55a620cd918633d112a70af82e39c7f25: Status 404 returned error can't find the container with id 97760b48fea74d9d9abd86492d8c75f55a620cd918633d112a70af82e39c7f25 Feb 01 15:24:17 crc kubenswrapper[4820]: I0201 15:24:17.872769 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6pqs" event={"ID":"fa85845c-cfaf-4168-9b8c-d8b79b67f37c","Type":"ContainerStarted","Data":"97760b48fea74d9d9abd86492d8c75f55a620cd918633d112a70af82e39c7f25"} Feb 01 15:24:18 crc kubenswrapper[4820]: I0201 15:24:18.883044 4820 generic.go:334] "Generic (PLEG): container finished" podID="fa85845c-cfaf-4168-9b8c-d8b79b67f37c" containerID="19e157507210ccedb533fbce2ae7cf49d61d9e2a1df3a331d512308cfb86f9bf" exitCode=0 Feb 01 15:24:18 crc kubenswrapper[4820]: I0201 15:24:18.883287 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6pqs" event={"ID":"fa85845c-cfaf-4168-9b8c-d8b79b67f37c","Type":"ContainerDied","Data":"19e157507210ccedb533fbce2ae7cf49d61d9e2a1df3a331d512308cfb86f9bf"} Feb 01 15:24:19 crc kubenswrapper[4820]: I0201 15:24:19.892360 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6pqs" event={"ID":"fa85845c-cfaf-4168-9b8c-d8b79b67f37c","Type":"ContainerStarted","Data":"a06e762da5995739168e538284883904b74dce125b0a62cdebec3551d4113bad"} Feb 01 15:24:21 crc kubenswrapper[4820]: I0201 15:24:21.198618 4820 scope.go:117] "RemoveContainer" containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91" Feb 01 15:24:21 crc kubenswrapper[4820]: E0201 15:24:21.199172 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:24:21 crc kubenswrapper[4820]: I0201 15:24:21.916777 4820 generic.go:334] "Generic (PLEG): container finished" podID="fa85845c-cfaf-4168-9b8c-d8b79b67f37c" containerID="a06e762da5995739168e538284883904b74dce125b0a62cdebec3551d4113bad" exitCode=0 Feb 01 15:24:21 crc kubenswrapper[4820]: I0201 15:24:21.916822 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6pqs" 
event={"ID":"fa85845c-cfaf-4168-9b8c-d8b79b67f37c","Type":"ContainerDied","Data":"a06e762da5995739168e538284883904b74dce125b0a62cdebec3551d4113bad"} Feb 01 15:24:22 crc kubenswrapper[4820]: I0201 15:24:22.925760 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6pqs" event={"ID":"fa85845c-cfaf-4168-9b8c-d8b79b67f37c","Type":"ContainerStarted","Data":"ff5edea11289235bbe4d7c69704f625431020c33d647b0102079285e1f456855"} Feb 01 15:24:22 crc kubenswrapper[4820]: I0201 15:24:22.947450 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t6pqs" podStartSLOduration=3.478417531 podStartE2EDuration="6.947433404s" podCreationTimestamp="2026-02-01 15:24:16 +0000 UTC" firstStartedPulling="2026-02-01 15:24:18.884705551 +0000 UTC m=+3800.405071845" lastFinishedPulling="2026-02-01 15:24:22.353721434 +0000 UTC m=+3803.874087718" observedRunningTime="2026-02-01 15:24:22.942441653 +0000 UTC m=+3804.462807947" watchObservedRunningTime="2026-02-01 15:24:22.947433404 +0000 UTC m=+3804.467799678" Feb 01 15:24:27 crc kubenswrapper[4820]: I0201 15:24:27.145185 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t6pqs" Feb 01 15:24:27 crc kubenswrapper[4820]: I0201 15:24:27.145769 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t6pqs" Feb 01 15:24:27 crc kubenswrapper[4820]: I0201 15:24:27.492535 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t6pqs" Feb 01 15:24:27 crc kubenswrapper[4820]: I0201 15:24:27.982125 4820 generic.go:334] "Generic (PLEG): container finished" podID="a15554c7-8e38-4bbe-8ff2-e7ff10bedb40" containerID="fdcff894861229422e925786a0de4a197f08e66b75421e97365ce2e81486eb84" exitCode=0 Feb 01 15:24:27 crc kubenswrapper[4820]: I0201 15:24:27.982194 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ng5b5/crc-debug-q29k8" event={"ID":"a15554c7-8e38-4bbe-8ff2-e7ff10bedb40","Type":"ContainerDied","Data":"fdcff894861229422e925786a0de4a197f08e66b75421e97365ce2e81486eb84"} Feb 01 15:24:28 crc kubenswrapper[4820]: I0201 15:24:28.044816 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t6pqs" Feb 01 15:24:28 crc kubenswrapper[4820]: I0201 15:24:28.090471 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t6pqs"] Feb 01 15:24:29 crc kubenswrapper[4820]: I0201 15:24:29.374508 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ng5b5/crc-debug-q29k8" Feb 01 15:24:29 crc kubenswrapper[4820]: I0201 15:24:29.421147 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ng5b5/crc-debug-q29k8"] Feb 01 15:24:29 crc kubenswrapper[4820]: I0201 15:24:29.431482 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ng5b5/crc-debug-q29k8"] Feb 01 15:24:29 crc kubenswrapper[4820]: I0201 15:24:29.480496 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a15554c7-8e38-4bbe-8ff2-e7ff10bedb40-host\") pod \"a15554c7-8e38-4bbe-8ff2-e7ff10bedb40\" (UID: \"a15554c7-8e38-4bbe-8ff2-e7ff10bedb40\") " Feb 01 15:24:29 crc kubenswrapper[4820]: I0201 15:24:29.480760 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2bqb\" (UniqueName: \"kubernetes.io/projected/a15554c7-8e38-4bbe-8ff2-e7ff10bedb40-kube-api-access-k2bqb\") pod \"a15554c7-8e38-4bbe-8ff2-e7ff10bedb40\" (UID: \"a15554c7-8e38-4bbe-8ff2-e7ff10bedb40\") " Feb 01 15:24:29 crc kubenswrapper[4820]: I0201 15:24:29.480642 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a15554c7-8e38-4bbe-8ff2-e7ff10bedb40-host" (OuterVolumeSpecName: "host") pod "a15554c7-8e38-4bbe-8ff2-e7ff10bedb40" (UID: "a15554c7-8e38-4bbe-8ff2-e7ff10bedb40"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 15:24:29 crc kubenswrapper[4820]: I0201 15:24:29.481534 4820 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a15554c7-8e38-4bbe-8ff2-e7ff10bedb40-host\") on node \"crc\" DevicePath \"\"" Feb 01 15:24:29 crc kubenswrapper[4820]: I0201 15:24:29.495081 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a15554c7-8e38-4bbe-8ff2-e7ff10bedb40-kube-api-access-k2bqb" (OuterVolumeSpecName: "kube-api-access-k2bqb") pod "a15554c7-8e38-4bbe-8ff2-e7ff10bedb40" (UID: "a15554c7-8e38-4bbe-8ff2-e7ff10bedb40"). InnerVolumeSpecName "kube-api-access-k2bqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:24:29 crc kubenswrapper[4820]: I0201 15:24:29.583733 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2bqb\" (UniqueName: \"kubernetes.io/projected/a15554c7-8e38-4bbe-8ff2-e7ff10bedb40-kube-api-access-k2bqb\") on node \"crc\" DevicePath \"\"" Feb 01 15:24:30 crc kubenswrapper[4820]: I0201 15:24:30.004567 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cecf0845b0559b0901c6541b90745cd9e4065a7317480dd0aa4b79acd68604e6" Feb 01 15:24:30 crc kubenswrapper[4820]: I0201 15:24:30.004576 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ng5b5/crc-debug-q29k8" Feb 01 15:24:30 crc kubenswrapper[4820]: I0201 15:24:30.004735 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t6pqs" podUID="fa85845c-cfaf-4168-9b8c-d8b79b67f37c" containerName="registry-server" containerID="cri-o://ff5edea11289235bbe4d7c69704f625431020c33d647b0102079285e1f456855" gracePeriod=2 Feb 01 15:24:30 crc kubenswrapper[4820]: I0201 15:24:30.590950 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ng5b5/crc-debug-rxkht"] Feb 01 15:24:30 crc kubenswrapper[4820]: E0201 15:24:30.593345 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a15554c7-8e38-4bbe-8ff2-e7ff10bedb40" containerName="container-00" Feb 01 15:24:30 crc kubenswrapper[4820]: I0201 15:24:30.593428 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a15554c7-8e38-4bbe-8ff2-e7ff10bedb40" containerName="container-00" Feb 01 15:24:30 crc kubenswrapper[4820]: I0201 15:24:30.593703 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a15554c7-8e38-4bbe-8ff2-e7ff10bedb40" containerName="container-00" Feb 01 15:24:30 crc kubenswrapper[4820]: I0201 15:24:30.594398 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ng5b5/crc-debug-rxkht" Feb 01 15:24:30 crc kubenswrapper[4820]: I0201 15:24:30.705460 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpx5m\" (UniqueName: \"kubernetes.io/projected/8b63a5cd-b7c9-4133-9247-14f3c653bc2b-kube-api-access-rpx5m\") pod \"crc-debug-rxkht\" (UID: \"8b63a5cd-b7c9-4133-9247-14f3c653bc2b\") " pod="openshift-must-gather-ng5b5/crc-debug-rxkht" Feb 01 15:24:30 crc kubenswrapper[4820]: I0201 15:24:30.705593 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8b63a5cd-b7c9-4133-9247-14f3c653bc2b-host\") pod \"crc-debug-rxkht\" (UID: \"8b63a5cd-b7c9-4133-9247-14f3c653bc2b\") " pod="openshift-must-gather-ng5b5/crc-debug-rxkht" Feb 01 15:24:30 crc kubenswrapper[4820]: I0201 15:24:30.809097 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpx5m\" (UniqueName: \"kubernetes.io/projected/8b63a5cd-b7c9-4133-9247-14f3c653bc2b-kube-api-access-rpx5m\") pod \"crc-debug-rxkht\" (UID: \"8b63a5cd-b7c9-4133-9247-14f3c653bc2b\") " pod="openshift-must-gather-ng5b5/crc-debug-rxkht" Feb 01 15:24:30 crc kubenswrapper[4820]: I0201 15:24:30.809257 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8b63a5cd-b7c9-4133-9247-14f3c653bc2b-host\") pod \"crc-debug-rxkht\" (UID: \"8b63a5cd-b7c9-4133-9247-14f3c653bc2b\") " pod="openshift-must-gather-ng5b5/crc-debug-rxkht" Feb 01 15:24:30 crc kubenswrapper[4820]: I0201 15:24:30.809541 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8b63a5cd-b7c9-4133-9247-14f3c653bc2b-host\") pod \"crc-debug-rxkht\" (UID: \"8b63a5cd-b7c9-4133-9247-14f3c653bc2b\") " pod="openshift-must-gather-ng5b5/crc-debug-rxkht" Feb 01 15:24:30 crc kubenswrapper[4820]: I0201 15:24:30.826311 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpx5m\" (UniqueName: 
\"kubernetes.io/projected/8b63a5cd-b7c9-4133-9247-14f3c653bc2b-kube-api-access-rpx5m\") pod \"crc-debug-rxkht\" (UID: \"8b63a5cd-b7c9-4133-9247-14f3c653bc2b\") " pod="openshift-must-gather-ng5b5/crc-debug-rxkht" Feb 01 15:24:30 crc kubenswrapper[4820]: I0201 15:24:30.909629 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ng5b5/crc-debug-rxkht" Feb 01 15:24:31 crc kubenswrapper[4820]: I0201 15:24:31.014123 4820 generic.go:334] "Generic (PLEG): container finished" podID="fa85845c-cfaf-4168-9b8c-d8b79b67f37c" containerID="ff5edea11289235bbe4d7c69704f625431020c33d647b0102079285e1f456855" exitCode=0 Feb 01 15:24:31 crc kubenswrapper[4820]: I0201 15:24:31.014191 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6pqs" event={"ID":"fa85845c-cfaf-4168-9b8c-d8b79b67f37c","Type":"ContainerDied","Data":"ff5edea11289235bbe4d7c69704f625431020c33d647b0102079285e1f456855"} Feb 01 15:24:31 crc kubenswrapper[4820]: I0201 15:24:31.014516 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6pqs" event={"ID":"fa85845c-cfaf-4168-9b8c-d8b79b67f37c","Type":"ContainerDied","Data":"97760b48fea74d9d9abd86492d8c75f55a620cd918633d112a70af82e39c7f25"} Feb 01 15:24:31 crc kubenswrapper[4820]: I0201 15:24:31.014532 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97760b48fea74d9d9abd86492d8c75f55a620cd918633d112a70af82e39c7f25" Feb 01 15:24:31 crc kubenswrapper[4820]: I0201 15:24:31.017108 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ng5b5/crc-debug-rxkht" event={"ID":"8b63a5cd-b7c9-4133-9247-14f3c653bc2b","Type":"ContainerStarted","Data":"7fd709fa4ee1f3d780adb899f5dc1db34c70e310807ce15802c4f3f9f3da0b0a"} Feb 01 15:24:31 crc kubenswrapper[4820]: I0201 15:24:31.041868 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6pqs" Feb 01 15:24:31 crc kubenswrapper[4820]: I0201 15:24:31.115725 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa85845c-cfaf-4168-9b8c-d8b79b67f37c-catalog-content\") pod \"fa85845c-cfaf-4168-9b8c-d8b79b67f37c\" (UID: \"fa85845c-cfaf-4168-9b8c-d8b79b67f37c\") " Feb 01 15:24:31 crc kubenswrapper[4820]: I0201 15:24:31.115783 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsg8w\" (UniqueName: \"kubernetes.io/projected/fa85845c-cfaf-4168-9b8c-d8b79b67f37c-kube-api-access-gsg8w\") pod \"fa85845c-cfaf-4168-9b8c-d8b79b67f37c\" (UID: \"fa85845c-cfaf-4168-9b8c-d8b79b67f37c\") " Feb 01 15:24:31 crc kubenswrapper[4820]: I0201 15:24:31.115820 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa85845c-cfaf-4168-9b8c-d8b79b67f37c-utilities\") pod \"fa85845c-cfaf-4168-9b8c-d8b79b67f37c\" (UID: \"fa85845c-cfaf-4168-9b8c-d8b79b67f37c\") " Feb 01 15:24:31 crc kubenswrapper[4820]: I0201 15:24:31.122056 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa85845c-cfaf-4168-9b8c-d8b79b67f37c-utilities" (OuterVolumeSpecName: "utilities") pod "fa85845c-cfaf-4168-9b8c-d8b79b67f37c" (UID: "fa85845c-cfaf-4168-9b8c-d8b79b67f37c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:24:31 crc kubenswrapper[4820]: I0201 15:24:31.130497 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa85845c-cfaf-4168-9b8c-d8b79b67f37c-kube-api-access-gsg8w" (OuterVolumeSpecName: "kube-api-access-gsg8w") pod "fa85845c-cfaf-4168-9b8c-d8b79b67f37c" (UID: "fa85845c-cfaf-4168-9b8c-d8b79b67f37c"). InnerVolumeSpecName "kube-api-access-gsg8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:24:31 crc kubenswrapper[4820]: I0201 15:24:31.170586 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa85845c-cfaf-4168-9b8c-d8b79b67f37c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa85845c-cfaf-4168-9b8c-d8b79b67f37c" (UID: "fa85845c-cfaf-4168-9b8c-d8b79b67f37c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:24:31 crc kubenswrapper[4820]: I0201 15:24:31.217348 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a15554c7-8e38-4bbe-8ff2-e7ff10bedb40" path="/var/lib/kubelet/pods/a15554c7-8e38-4bbe-8ff2-e7ff10bedb40/volumes" Feb 01 15:24:31 crc kubenswrapper[4820]: I0201 15:24:31.220946 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsg8w\" (UniqueName: \"kubernetes.io/projected/fa85845c-cfaf-4168-9b8c-d8b79b67f37c-kube-api-access-gsg8w\") on node \"crc\" DevicePath \"\"" Feb 01 15:24:31 crc kubenswrapper[4820]: I0201 15:24:31.220968 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa85845c-cfaf-4168-9b8c-d8b79b67f37c-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 15:24:31 crc kubenswrapper[4820]: I0201 15:24:31.220977 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa85845c-cfaf-4168-9b8c-d8b79b67f37c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 15:24:32 crc kubenswrapper[4820]: I0201 15:24:32.028461 4820 generic.go:334] "Generic (PLEG): container finished" podID="8b63a5cd-b7c9-4133-9247-14f3c653bc2b" containerID="85f05aaa00cb83f11459f7002f09ba7a5761f11c9d4a962bca9dab703e71f08e" exitCode=0 Feb 01 15:24:32 crc kubenswrapper[4820]: I0201 15:24:32.028521 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ng5b5/crc-debug-rxkht" event={"ID":"8b63a5cd-b7c9-4133-9247-14f3c653bc2b","Type":"ContainerDied","Data":"85f05aaa00cb83f11459f7002f09ba7a5761f11c9d4a962bca9dab703e71f08e"} Feb 01 15:24:32 crc kubenswrapper[4820]: I0201 15:24:32.029098 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6pqs" Feb 01 15:24:32 crc kubenswrapper[4820]: I0201 15:24:32.069475 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t6pqs"] Feb 01 15:24:32 crc kubenswrapper[4820]: I0201 15:24:32.083057 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t6pqs"] Feb 01 15:24:33 crc kubenswrapper[4820]: I0201 15:24:33.128557 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ng5b5/crc-debug-rxkht" Feb 01 15:24:33 crc kubenswrapper[4820]: I0201 15:24:33.208021 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa85845c-cfaf-4168-9b8c-d8b79b67f37c" path="/var/lib/kubelet/pods/fa85845c-cfaf-4168-9b8c-d8b79b67f37c/volumes" Feb 01 15:24:33 crc kubenswrapper[4820]: I0201 15:24:33.255105 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpx5m\" (UniqueName: \"kubernetes.io/projected/8b63a5cd-b7c9-4133-9247-14f3c653bc2b-kube-api-access-rpx5m\") pod \"8b63a5cd-b7c9-4133-9247-14f3c653bc2b\" (UID: \"8b63a5cd-b7c9-4133-9247-14f3c653bc2b\") " Feb 01 15:24:33 crc kubenswrapper[4820]: I0201 15:24:33.255175 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8b63a5cd-b7c9-4133-9247-14f3c653bc2b-host\") pod \"8b63a5cd-b7c9-4133-9247-14f3c653bc2b\" (UID: \"8b63a5cd-b7c9-4133-9247-14f3c653bc2b\") " Feb 01 15:24:33 crc kubenswrapper[4820]: I0201 15:24:33.255810 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b63a5cd-b7c9-4133-9247-14f3c653bc2b-host" (OuterVolumeSpecName: "host") pod "8b63a5cd-b7c9-4133-9247-14f3c653bc2b" (UID: "8b63a5cd-b7c9-4133-9247-14f3c653bc2b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 15:24:33 crc kubenswrapper[4820]: I0201 15:24:33.268058 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b63a5cd-b7c9-4133-9247-14f3c653bc2b-kube-api-access-rpx5m" (OuterVolumeSpecName: "kube-api-access-rpx5m") pod "8b63a5cd-b7c9-4133-9247-14f3c653bc2b" (UID: "8b63a5cd-b7c9-4133-9247-14f3c653bc2b"). InnerVolumeSpecName "kube-api-access-rpx5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:24:33 crc kubenswrapper[4820]: I0201 15:24:33.357140 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpx5m\" (UniqueName: \"kubernetes.io/projected/8b63a5cd-b7c9-4133-9247-14f3c653bc2b-kube-api-access-rpx5m\") on node \"crc\" DevicePath \"\"" Feb 01 15:24:33 crc kubenswrapper[4820]: I0201 15:24:33.357171 4820 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8b63a5cd-b7c9-4133-9247-14f3c653bc2b-host\") on node \"crc\" DevicePath \"\"" Feb 01 15:24:33 crc kubenswrapper[4820]: I0201 15:24:33.644038 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ng5b5/crc-debug-rxkht"] Feb 01 15:24:33 crc kubenswrapper[4820]: I0201 15:24:33.651326 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ng5b5/crc-debug-rxkht"] Feb 01 15:24:34 crc kubenswrapper[4820]: I0201 15:24:34.044676 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fd709fa4ee1f3d780adb899f5dc1db34c70e310807ce15802c4f3f9f3da0b0a" Feb 01 15:24:34 crc kubenswrapper[4820]: I0201 15:24:34.044728 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ng5b5/crc-debug-rxkht" Feb 01 15:24:34 crc kubenswrapper[4820]: I0201 15:24:34.198979 4820 scope.go:117] "RemoveContainer" containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91" Feb 01 15:24:34 crc kubenswrapper[4820]: E0201 15:24:34.199728 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:24:34 crc kubenswrapper[4820]: I0201 15:24:34.876217 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ng5b5/crc-debug-btbjq"] Feb 01 15:24:34 crc kubenswrapper[4820]: E0201 15:24:34.876579 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa85845c-cfaf-4168-9b8c-d8b79b67f37c" containerName="extract-content" Feb 01 15:24:34 crc kubenswrapper[4820]: I0201 15:24:34.876600 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa85845c-cfaf-4168-9b8c-d8b79b67f37c" containerName="extract-content" Feb 01 15:24:34 crc kubenswrapper[4820]: E0201 15:24:34.876626 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa85845c-cfaf-4168-9b8c-d8b79b67f37c" containerName="extract-utilities" Feb 01 15:24:34 crc kubenswrapper[4820]: I0201 15:24:34.876636 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa85845c-cfaf-4168-9b8c-d8b79b67f37c" containerName="extract-utilities" Feb 01 15:24:34 crc kubenswrapper[4820]: E0201 15:24:34.876648 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b63a5cd-b7c9-4133-9247-14f3c653bc2b" containerName="container-00" Feb 01 15:24:34 crc kubenswrapper[4820]: I0201 15:24:34.876655 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b63a5cd-b7c9-4133-9247-14f3c653bc2b" containerName="container-00" Feb 01 15:24:34 crc kubenswrapper[4820]: E0201 15:24:34.876673 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa85845c-cfaf-4168-9b8c-d8b79b67f37c" containerName="registry-server" Feb 01 15:24:34 crc kubenswrapper[4820]: I0201 15:24:34.876679 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa85845c-cfaf-4168-9b8c-d8b79b67f37c" containerName="registry-server" Feb 01 15:24:34 crc kubenswrapper[4820]: I0201 15:24:34.876888 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa85845c-cfaf-4168-9b8c-d8b79b67f37c" containerName="registry-server" Feb 01 15:24:34 crc kubenswrapper[4820]: I0201 15:24:34.876909 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b63a5cd-b7c9-4133-9247-14f3c653bc2b" containerName="container-00" Feb 01 15:24:34 crc kubenswrapper[4820]: I0201 15:24:34.877579 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ng5b5/crc-debug-btbjq" Feb 01 15:24:35 crc kubenswrapper[4820]: I0201 15:24:35.002711 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw65s\" (UniqueName: \"kubernetes.io/projected/f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c-kube-api-access-dw65s\") pod \"crc-debug-btbjq\" (UID: \"f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c\") " pod="openshift-must-gather-ng5b5/crc-debug-btbjq" Feb 01 15:24:35 crc kubenswrapper[4820]: I0201 15:24:35.002851 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c-host\") pod \"crc-debug-btbjq\" (UID: \"f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c\") " pod="openshift-must-gather-ng5b5/crc-debug-btbjq" Feb 01 15:24:35 crc kubenswrapper[4820]: I0201 15:24:35.104527 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw65s\" (UniqueName: \"kubernetes.io/projected/f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c-kube-api-access-dw65s\") pod \"crc-debug-btbjq\" (UID: \"f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c\") " pod="openshift-must-gather-ng5b5/crc-debug-btbjq" Feb 01 15:24:35 crc kubenswrapper[4820]: I0201 15:24:35.104697 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c-host\") pod \"crc-debug-btbjq\" (UID: \"f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c\") " pod="openshift-must-gather-ng5b5/crc-debug-btbjq" Feb 01 15:24:35 crc kubenswrapper[4820]: I0201 15:24:35.104829 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c-host\") pod \"crc-debug-btbjq\" (UID: \"f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c\") " pod="openshift-must-gather-ng5b5/crc-debug-btbjq" Feb 01 15:24:35 crc kubenswrapper[4820]: I0201 15:24:35.123238 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw65s\" (UniqueName: \"kubernetes.io/projected/f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c-kube-api-access-dw65s\") pod \"crc-debug-btbjq\" (UID: \"f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c\") " pod="openshift-must-gather-ng5b5/crc-debug-btbjq" Feb 01 15:24:35 crc kubenswrapper[4820]: I0201 15:24:35.197291 4820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ng5b5/crc-debug-btbjq" Feb 01 15:24:35 crc kubenswrapper[4820]: I0201 15:24:35.212425 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b63a5cd-b7c9-4133-9247-14f3c653bc2b" path="/var/lib/kubelet/pods/8b63a5cd-b7c9-4133-9247-14f3c653bc2b/volumes" Feb 01 15:24:35 crc kubenswrapper[4820]: W0201 15:24:35.236069 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2eb97cf_bdf9_40ee_a46f_3cbb9d749c4c.slice/crio-6432a58f0ee622b60d318632ea913f870f3e5ca4ec7466c3620bde7704f2adce WatchSource:0}: Error finding container 6432a58f0ee622b60d318632ea913f870f3e5ca4ec7466c3620bde7704f2adce: Status 404 returned error can't find the container with id 6432a58f0ee622b60d318632ea913f870f3e5ca4ec7466c3620bde7704f2adce Feb 01 15:24:36 crc kubenswrapper[4820]: I0201 15:24:36.067536 4820 generic.go:334] "Generic (PLEG): container finished" podID="f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c" containerID="c774558787ffbdd07bdbc63465f56f4ed249671f27d7b1a5afc18c82c5acf104" exitCode=0 Feb 01 15:24:36 crc kubenswrapper[4820]: I0201 15:24:36.067691 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ng5b5/crc-debug-btbjq" event={"ID":"f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c","Type":"ContainerDied","Data":"c774558787ffbdd07bdbc63465f56f4ed249671f27d7b1a5afc18c82c5acf104"} Feb 01 15:24:36 crc kubenswrapper[4820]: I0201 15:24:36.067927 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ng5b5/crc-debug-btbjq" event={"ID":"f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c","Type":"ContainerStarted","Data":"6432a58f0ee622b60d318632ea913f870f3e5ca4ec7466c3620bde7704f2adce"} Feb 01 15:24:36 crc kubenswrapper[4820]: I0201 15:24:36.132369 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ng5b5/crc-debug-btbjq"] Feb 01 15:24:36 crc kubenswrapper[4820]: I0201 15:24:36.142707 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ng5b5/crc-debug-btbjq"] Feb 01 15:24:37 crc kubenswrapper[4820]: I0201 15:24:37.220943 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ng5b5/crc-debug-btbjq" Feb 01 15:24:37 crc kubenswrapper[4820]: I0201 15:24:37.353452 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw65s\" (UniqueName: \"kubernetes.io/projected/f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c-kube-api-access-dw65s\") pod \"f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c\" (UID: \"f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c\") " Feb 01 15:24:37 crc kubenswrapper[4820]: I0201 15:24:37.353646 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c-host\") pod \"f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c\" (UID: \"f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c\") " Feb 01 15:24:37 crc kubenswrapper[4820]: I0201 15:24:37.353724 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c-host" (OuterVolumeSpecName: "host") pod "f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c" (UID: "f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 15:24:37 crc kubenswrapper[4820]: I0201 15:24:37.354453 4820 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c-host\") on node \"crc\" DevicePath \"\"" Feb 01 15:24:37 crc kubenswrapper[4820]: I0201 15:24:37.362000 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c-kube-api-access-dw65s" (OuterVolumeSpecName: "kube-api-access-dw65s") pod "f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c" (UID: "f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c"). InnerVolumeSpecName "kube-api-access-dw65s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:24:37 crc kubenswrapper[4820]: I0201 15:24:37.455868 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw65s\" (UniqueName: \"kubernetes.io/projected/f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c-kube-api-access-dw65s\") on node \"crc\" DevicePath \"\"" Feb 01 15:24:38 crc kubenswrapper[4820]: I0201 15:24:38.088056 4820 scope.go:117] "RemoveContainer" containerID="c774558787ffbdd07bdbc63465f56f4ed249671f27d7b1a5afc18c82c5acf104" Feb 01 15:24:38 crc kubenswrapper[4820]: I0201 15:24:38.088100 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ng5b5/crc-debug-btbjq" Feb 01 15:24:39 crc kubenswrapper[4820]: I0201 15:24:39.211176 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c" path="/var/lib/kubelet/pods/f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c/volumes" Feb 01 15:24:47 crc kubenswrapper[4820]: I0201 15:24:47.199672 4820 scope.go:117] "RemoveContainer" containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91" Feb 01 15:24:47 crc kubenswrapper[4820]: E0201 15:24:47.200617 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:24:59 crc kubenswrapper[4820]: I0201 15:24:59.204435 4820 scope.go:117] "RemoveContainer" containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91" Feb 01 15:24:59 crc kubenswrapper[4820]: E0201 15:24:59.205388 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:25:07 crc kubenswrapper[4820]: I0201 15:25:07.470760 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-84cfb79c88-556g7_3582e8fb-39ef-4974-ba08-1cf88f7fc83e/barbican-api/0.log" Feb 01 15:25:07 crc kubenswrapper[4820]: I0201 15:25:07.645356 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-84cfb79c88-556g7_3582e8fb-39ef-4974-ba08-1cf88f7fc83e/barbican-api-log/0.log" Feb 01 15:25:07 crc kubenswrapper[4820]: I0201 15:25:07.653492 4820 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-698cc4bfb6-j2k5c_3bf7831d-dffe-4ed1-bfe4-94e787f63f67/barbican-keystone-listener/0.log" Feb 01 15:25:07 crc kubenswrapper[4820]: I0201 15:25:07.872246 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6f46fc99f5-2w2vx_b32621b0-3167-4e95-bd7a-e34b45dca08e/barbican-worker/0.log" Feb 01 15:25:07 crc kubenswrapper[4820]: I0201 15:25:07.935180 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-698cc4bfb6-j2k5c_3bf7831d-dffe-4ed1-bfe4-94e787f63f67/barbican-keystone-listener-log/0.log" Feb 01 15:25:07 crc kubenswrapper[4820]: I0201 15:25:07.972063 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6f46fc99f5-2w2vx_b32621b0-3167-4e95-bd7a-e34b45dca08e/barbican-worker-log/0.log" Feb 01 15:25:08 crc kubenswrapper[4820]: I0201 15:25:08.065241 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-xfmgh_8fd3481e-91d0-45af-9796-cdca54b2b647/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 01 15:25:08 crc kubenswrapper[4820]: I0201 15:25:08.183618 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_570bc5fa-118c-47b4-84fa-fac88c5dc213/ceilometer-central-agent/0.log" Feb 01 15:25:08 crc kubenswrapper[4820]: I0201 15:25:08.403590 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_570bc5fa-118c-47b4-84fa-fac88c5dc213/ceilometer-notification-agent/0.log" Feb 01 15:25:08 crc kubenswrapper[4820]: I0201 15:25:08.442579 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_570bc5fa-118c-47b4-84fa-fac88c5dc213/proxy-httpd/0.log" Feb 01 15:25:08 crc kubenswrapper[4820]: I0201 15:25:08.524298 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_570bc5fa-118c-47b4-84fa-fac88c5dc213/sg-core/0.log" Feb 01 15:25:08 crc kubenswrapper[4820]: I0201 15:25:08.607381 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-ns7h4_f1d2dd40-e56b-4ca1-9a71-17fbb175f20f/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Feb 01 15:25:08 crc kubenswrapper[4820]: I0201 15:25:08.765892 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wwzm2_a0b2fa04-9143-40a5-840e-72d5753f954b/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Feb 01 15:25:08 crc kubenswrapper[4820]: I0201 15:25:08.877551 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_71d67552-8350-45f8-8657-61363724da90/cinder-api-log/0.log" Feb 01 15:25:08 crc kubenswrapper[4820]: I0201 15:25:08.968934 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_71d67552-8350-45f8-8657-61363724da90/cinder-api/0.log" Feb 01 15:25:09 crc kubenswrapper[4820]: I0201 15:25:09.136045 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_2efc1e63-fe88-414b-accb-7a48e72f12d6/probe/0.log" Feb 01 15:25:09 crc kubenswrapper[4820]: I0201 15:25:09.194578 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_2efc1e63-fe88-414b-accb-7a48e72f12d6/cinder-backup/0.log" Feb 01 15:25:09 crc kubenswrapper[4820]: I0201 15:25:09.254456 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-scheduler-0_03581fce-bf82-4dd7-8ebe-37e338ba81dc/cinder-scheduler/0.log" Feb 01 15:25:09 crc kubenswrapper[4820]: I0201 15:25:09.353223 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_03581fce-bf82-4dd7-8ebe-37e338ba81dc/probe/0.log" Feb 01 15:25:09 crc kubenswrapper[4820]: I0201 15:25:09.414854 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_8dabdc80-a117-4e53-9b1d-b8af575ba10f/cinder-volume/0.log" Feb 01 15:25:09 crc kubenswrapper[4820]: I0201 15:25:09.508129 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_8dabdc80-a117-4e53-9b1d-b8af575ba10f/probe/0.log" Feb 01 15:25:09 crc kubenswrapper[4820]: I0201 15:25:09.712936 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-58x6d_ccf053de-7e43-4c69-88c0-89c1b4d4832e/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 01 15:25:09 crc kubenswrapper[4820]: I0201 15:25:09.738054 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-mhzxf_f1d46ee7-c695-461e-8f2b-8cccfcf2ca1a/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 01 15:25:09 crc kubenswrapper[4820]: I0201 15:25:09.873020 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-ll64p_9ada9dd3-3a44-4050-8e20-76afe7a02f4c/init/0.log" Feb 01 15:25:10 crc kubenswrapper[4820]: I0201 15:25:10.021588 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-ll64p_9ada9dd3-3a44-4050-8e20-76afe7a02f4c/init/0.log" Feb 01 15:25:10 crc kubenswrapper[4820]: I0201 15:25:10.064038 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_939be59c-3ae2-4a0e-ade1-c491cb03289e/glance-httpd/0.log" Feb 01 15:25:10 crc kubenswrapper[4820]: I0201 15:25:10.129450 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-ll64p_9ada9dd3-3a44-4050-8e20-76afe7a02f4c/dnsmasq-dns/0.log" Feb 01 15:25:10 crc kubenswrapper[4820]: I0201 15:25:10.234947 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_939be59c-3ae2-4a0e-ade1-c491cb03289e/glance-log/0.log" Feb 01 15:25:10 crc kubenswrapper[4820]: I0201 15:25:10.275232 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c528ece8-d372-4479-b08e-cd5e12306def/glance-httpd/0.log" Feb 01 15:25:10 crc kubenswrapper[4820]: I0201 15:25:10.336523 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c528ece8-d372-4479-b08e-cd5e12306def/glance-log/0.log" Feb 01 15:25:10 crc kubenswrapper[4820]: I0201 15:25:10.597962 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-69c64959b6-498kr_35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18/horizon/0.log" Feb 01 15:25:10 crc kubenswrapper[4820]: I0201 15:25:10.673536 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-69c64959b6-498kr_35ed5c7d-a0cb-4a17-ae1a-cedabb9c9b18/horizon-log/0.log" Feb 01 15:25:10 crc kubenswrapper[4820]: I0201 15:25:10.698801 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-p6xp4_14aabcc4-ac5c-416f-8887-988e9292625b/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 01 15:25:10 crc kubenswrapper[4820]: I0201 15:25:10.809453 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-f8qrq_b0b37762-2ced-48f3-b7f1-8e2b14cb2fee/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 01 15:25:11 crc kubenswrapper[4820]: I0201 15:25:11.037807 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29499301-92jj6_acc829d8-a18c-48cc-8b6d-4516a64c1de9/keystone-cron/0.log" Feb 01 15:25:11 crc kubenswrapper[4820]: I0201 15:25:11.151760 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_f94e4784-db7a-4f03-ac77-8a50d0b3479d/kube-state-metrics/0.log" Feb 01 15:25:11 crc kubenswrapper[4820]: I0201 15:25:11.468430 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-vw2mg_c5f9fa3b-6e7e-41da-9742-1d7d0a5b0128/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 01 15:25:11 crc kubenswrapper[4820]: I0201 15:25:11.749515 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485/manila-api/0.log" Feb 01 15:25:11 crc kubenswrapper[4820]: I0201 15:25:11.803695 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7bb96c9945-brtg8_42aa9479-729d-4c7d-a048-8db0d4434679/keystone-api/0.log" Feb 01 15:25:11 crc kubenswrapper[4820]: I0201 15:25:11.884664 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_21ae71fc-043a-49f0-8b83-5c64edfb9e9c/probe/0.log" Feb 01 15:25:11 crc kubenswrapper[4820]: I0201 15:25:11.893509 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_21ae71fc-043a-49f0-8b83-5c64edfb9e9c/manila-scheduler/0.log" Feb 01 15:25:12 crc kubenswrapper[4820]: I0201 15:25:12.045347 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_7067e66b-2ec5-405e-871a-aad4fd9fd5cd/probe/0.log" Feb 01 15:25:12 crc kubenswrapper[4820]: I0201 15:25:12.285663 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_7067e66b-2ec5-405e-871a-aad4fd9fd5cd/manila-share/0.log" Feb 01 15:25:12 crc kubenswrapper[4820]: I0201 15:25:12.315981 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_46bb2a2c-2a27-4a6c-8e18-d7b4cdc11485/manila-api-log/0.log" Feb 01 15:25:12 crc kubenswrapper[4820]: I0201 15:25:12.416712 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8bc6c8777-j2rkq_6e78848a-f345-4cf8-a149-4d2a3d6de52e/neutron-api/0.log" Feb 01 15:25:12 crc kubenswrapper[4820]: I0201 15:25:12.488037 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8bc6c8777-j2rkq_6e78848a-f345-4cf8-a149-4d2a3d6de52e/neutron-httpd/0.log" Feb 01 15:25:12 crc kubenswrapper[4820]: I0201 15:25:12.580419 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-whrbb_588885cf-0582-447f-8eca-9580725ecc0e/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 01 15:25:12 crc kubenswrapper[4820]: I0201 15:25:12.851100 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_2fd1b280-fb87-44c5-ab0e-fff3fedfff7d/nova-api-log/0.log" Feb 01 15:25:12 crc kubenswrapper[4820]: I0201 15:25:12.989456 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_f2655f84-261a-48ba-b019-d2e11ed4e80e/nova-cell0-conductor-conductor/0.log" Feb 01 15:25:13 crc kubenswrapper[4820]: I0201 15:25:13.079974 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2fd1b280-fb87-44c5-ab0e-fff3fedfff7d/nova-api-api/0.log" Feb 01 15:25:13 crc kubenswrapper[4820]: I0201 15:25:13.154012 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_1f6d787d-a27d-4f53-aa4c-794b09283f9e/nova-cell1-conductor-conductor/0.log" Feb 01 15:25:13 crc kubenswrapper[4820]: I0201 15:25:13.894952 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7njn2_1573e690-1a23-4563-806d-8023f7d44c43/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Feb 01 15:25:13 crc kubenswrapper[4820]: I0201 15:25:13.920113 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_75b482ca-64e8-42a9-b8f2-5272e591448b/nova-cell1-novncproxy-novncproxy/0.log" Feb 01 15:25:14 crc kubenswrapper[4820]: I0201 15:25:14.087039 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_92d962ec-13d6-4839-be99-72ecf5dd3980/nova-metadata-log/0.log" Feb 01 15:25:14 crc kubenswrapper[4820]: I0201 15:25:14.199657 4820 scope.go:117] "RemoveContainer" containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91" Feb 01 15:25:14 crc kubenswrapper[4820]: E0201 15:25:14.199859 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:25:14 crc kubenswrapper[4820]: I0201 15:25:14.229667 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_36f2dcef-35d9-4ef2-b4b8-afd55a7683c5/nova-scheduler-scheduler/0.log" Feb 01 15:25:14 crc kubenswrapper[4820]: I0201 15:25:14.333727 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_530f8225-6e15-4177-9b64-24e4f767f6c5/mysql-bootstrap/0.log" Feb 01 15:25:14 crc kubenswrapper[4820]: I0201 15:25:14.477095 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_530f8225-6e15-4177-9b64-24e4f767f6c5/mysql-bootstrap/0.log" Feb 01 15:25:14 crc kubenswrapper[4820]: I0201 15:25:14.639752 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_530f8225-6e15-4177-9b64-24e4f767f6c5/galera/0.log" Feb 01 15:25:14 crc kubenswrapper[4820]: I0201 15:25:14.660683 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b/mysql-bootstrap/0.log" Feb 01 15:25:14 crc kubenswrapper[4820]: I0201 15:25:14.906088 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b/galera/0.log" Feb 01 15:25:14 crc kubenswrapper[4820]: I0201 
15:25:14.942298 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e3c7d7c1-20b6-4370-a0ae-da76c3b59c3b/mysql-bootstrap/0.log" Feb 01 15:25:15 crc kubenswrapper[4820]: I0201 15:25:15.420605 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-l4qhs_e613c7d2-9b6b-4e3c-93d7-617b826931c7/openstack-network-exporter/0.log" Feb 01 15:25:15 crc kubenswrapper[4820]: I0201 15:25:15.460684 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_06eafee6-9b74-4559-ba89-633ab4f4f036/openstackclient/0.log" Feb 01 15:25:15 crc kubenswrapper[4820]: I0201 15:25:15.483337 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_92d962ec-13d6-4839-be99-72ecf5dd3980/nova-metadata-metadata/0.log" Feb 01 15:25:15 crc kubenswrapper[4820]: I0201 15:25:15.656889 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rqms2_75447ae7-687e-45f6-925f-7091cd5c8930/ovsdb-server-init/0.log" Feb 01 15:25:15 crc kubenswrapper[4820]: I0201 15:25:15.858087 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rqms2_75447ae7-687e-45f6-925f-7091cd5c8930/ovsdb-server/0.log" Feb 01 15:25:15 crc kubenswrapper[4820]: I0201 15:25:15.883270 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rqms2_75447ae7-687e-45f6-925f-7091cd5c8930/ovs-vswitchd/0.log" Feb 01 15:25:15 crc kubenswrapper[4820]: I0201 15:25:15.884972 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rqms2_75447ae7-687e-45f6-925f-7091cd5c8930/ovsdb-server-init/0.log" Feb 01 15:25:16 crc kubenswrapper[4820]: I0201 15:25:16.078412 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-wrqhs_0badf713-2cde-439b-8a0e-c2eedac05b99/ovn-controller/0.log" Feb 01 15:25:16 crc kubenswrapper[4820]: I0201 15:25:16.158056 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-k58zp_0bcf6e2a-cd11-4f34-bb8e-20002b75fb34/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 01 15:25:16 crc kubenswrapper[4820]: I0201 15:25:16.294217 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a1b1815f-5154-480a-be95-af29b7635c0c/openstack-network-exporter/0.log" Feb 01 15:25:16 crc kubenswrapper[4820]: I0201 15:25:16.327140 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a1b1815f-5154-480a-be95-af29b7635c0c/ovn-northd/0.log" Feb 01 15:25:16 crc kubenswrapper[4820]: I0201 15:25:16.369510 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_48888a7d-f908-4f3c-9335-d2ebe8e19690/openstack-network-exporter/0.log" Feb 01 15:25:16 crc kubenswrapper[4820]: I0201 15:25:16.527285 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_48888a7d-f908-4f3c-9335-d2ebe8e19690/ovsdbserver-nb/0.log" Feb 01 15:25:16 crc kubenswrapper[4820]: I0201 15:25:16.620514 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_584885db-41e8-4667-96bd-3f180ac41ae4/ovsdbserver-sb/0.log" Feb 01 15:25:16 crc kubenswrapper[4820]: I0201 15:25:16.622800 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_584885db-41e8-4667-96bd-3f180ac41ae4/openstack-network-exporter/0.log" Feb 01 15:25:16 crc kubenswrapper[4820]: 
I0201 15:25:16.869156 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6bf6b5fdf4-gb9fp_ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5/placement-api/0.log" Feb 01 15:25:16 crc kubenswrapper[4820]: I0201 15:25:16.899402 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6bf6b5fdf4-gb9fp_ceecf13f-3dc2-4bb3-ab31-c7ded4efa2c5/placement-log/0.log" Feb 01 15:25:16 crc kubenswrapper[4820]: I0201 15:25:16.982368 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ec613d0e-38d1-4ba1-950c-130a412ace9b/setup-container/0.log" Feb 01 15:25:17 crc kubenswrapper[4820]: I0201 15:25:17.130113 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ec613d0e-38d1-4ba1-950c-130a412ace9b/setup-container/0.log" Feb 01 15:25:17 crc kubenswrapper[4820]: I0201 15:25:17.208446 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ec613d0e-38d1-4ba1-950c-130a412ace9b/rabbitmq/0.log" Feb 01 15:25:17 crc kubenswrapper[4820]: I0201 15:25:17.230676 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_68ed3721-fba2-41c2-bb4a-ee20df021175/setup-container/0.log" Feb 01 15:25:17 crc kubenswrapper[4820]: I0201 15:25:17.408483 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_68ed3721-fba2-41c2-bb4a-ee20df021175/setup-container/0.log" Feb 01 15:25:17 crc kubenswrapper[4820]: I0201 15:25:17.422063 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_68ed3721-fba2-41c2-bb4a-ee20df021175/rabbitmq/0.log" Feb 01 15:25:17 crc kubenswrapper[4820]: I0201 15:25:17.499771 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-9zb7b_5383bb66-a0ea-4b49-8f0a-0a2d9e71d425/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 01 15:25:17 crc kubenswrapper[4820]: I0201 15:25:17.641267 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-hs5kb_5297ced9-f7cd-4f62-8cbc-560f6395b5ea/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 01 15:25:17 crc kubenswrapper[4820]: I0201 15:25:17.769801 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-4dbwj_ffeb1c3b-061a-4f96-95c1-69011e7f7028/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 01 15:25:17 crc kubenswrapper[4820]: I0201 15:25:17.853426 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-jzj27_9d8f9e59-fa16-4f98-84f2-0b66514e6a0f/ssh-known-hosts-edpm-deployment/0.log" Feb 01 15:25:18 crc kubenswrapper[4820]: I0201 15:25:18.104869 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_9b040975-c603-4b7a-875c-c372ddb0e24e/tempest-tests-tempest-tests-runner/0.log" Feb 01 15:25:18 crc kubenswrapper[4820]: I0201 15:25:18.110836 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_64854884-2922-44c0-8ae0-dc3e8d3e2b17/test-operator-logs-container/0.log" Feb 01 15:25:18 crc kubenswrapper[4820]: I0201 15:25:18.284094 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-zpsvv_72f3578c-0dac-4edd-ad36-85a7b7930c01/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 01 15:25:28 crc kubenswrapper[4820]: I0201 15:25:28.199724 4820 scope.go:117] "RemoveContainer" containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91" Feb 01 15:25:28 crc kubenswrapper[4820]: E0201 15:25:28.200503 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:25:37 crc kubenswrapper[4820]: I0201 15:25:37.038624 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_aa35f7d5-c06d-4d2e-8806-c6e638c9db02/memcached/0.log" Feb 01 15:25:41 crc kubenswrapper[4820]: I0201 15:25:41.198551 4820 scope.go:117] "RemoveContainer" containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91" Feb 01 15:25:41 crc kubenswrapper[4820]: E0201 15:25:41.199556 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" Feb 01 15:25:44 crc kubenswrapper[4820]: I0201 15:25:44.353994 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65_59b8afbb-2970-4526-be1a-4a52966f2387/util/0.log" Feb 01 15:25:44 crc kubenswrapper[4820]: I0201 15:25:44.504216 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65_59b8afbb-2970-4526-be1a-4a52966f2387/util/0.log" Feb 01 15:25:44 crc kubenswrapper[4820]: I0201 15:25:44.545710 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65_59b8afbb-2970-4526-be1a-4a52966f2387/pull/0.log" Feb 01 15:25:44 crc kubenswrapper[4820]: I0201 15:25:44.600695 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65_59b8afbb-2970-4526-be1a-4a52966f2387/pull/0.log" Feb 01 15:25:44 crc kubenswrapper[4820]: I0201 15:25:44.829761 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65_59b8afbb-2970-4526-be1a-4a52966f2387/util/0.log" Feb 01 15:25:44 crc kubenswrapper[4820]: I0201 15:25:44.830318 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65_59b8afbb-2970-4526-be1a-4a52966f2387/extract/0.log" Feb 01 15:25:44 crc kubenswrapper[4820]: I0201 15:25:44.859757 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_5b97366a7a314bf7ac04bcea53c010ad01dc7def19c67c4467b482118d7kl65_59b8afbb-2970-4526-be1a-4a52966f2387/pull/0.log" Feb 01 15:25:45 crc kubenswrapper[4820]: I0201 15:25:45.035775 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-wnzxh_52e7e427-3656-4c7a-afbb-a7cbd89d9318/manager/0.log" Feb 01 15:25:45 crc kubenswrapper[4820]: I0201 15:25:45.141888 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-9fz5b_4a8d372d-6d1c-4366-983c-947ccd5403e6/manager/0.log" Feb 01 15:25:45 crc kubenswrapper[4820]: I0201 15:25:45.187775 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-rf496_90739996-10ed-4ad6-b704-012fafc3505f/manager/0.log" Feb 01 15:25:45 crc kubenswrapper[4820]: I0201 15:25:45.412569 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-slbdv_92bf88c4-d908-4d13-be4e-72936688113c/manager/0.log" Feb 01 15:25:45 crc kubenswrapper[4820]: I0201 15:25:45.414061 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-h9f62_752637ba-3ba3-4b87-89f4-d2a079743b70/manager/0.log" Feb 01 15:25:45 crc kubenswrapper[4820]: I0201 15:25:45.566093 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-fqfvc_670d53e4-21ca-4ec7-b72b-0e938b2d85e8/manager/0.log" Feb 01 15:25:45 crc kubenswrapper[4820]: I0201 15:25:45.831390 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-z9s78_ccd8a397-de84-44f7-ae90-f0e7a51f2d80/manager/0.log" Feb 01 15:25:45 crc kubenswrapper[4820]: I0201 15:25:45.854589 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-tbjvq_09ceaf4b-4a63-4ba6-9b77-ac550850ffe4/manager/0.log" Feb 01 15:25:45 crc kubenswrapper[4820]: I0201 15:25:45.986671 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-x5p2q_abe5d51e-a818-4e88-93d9-4f53a8d368b4/manager/0.log" Feb 01 15:25:46 crc kubenswrapper[4820]: I0201 15:25:46.047141 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-d85699b78-lhgzd_e5493c34-468b-4636-b236-9b5cb6e95de1/manager/0.log" Feb 01 15:25:46 crc kubenswrapper[4820]: I0201 15:25:46.181205 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-h5tlb_7524b653-9c87-474e-b819-ebdd2864815c/manager/0.log" Feb 01 15:25:46 crc kubenswrapper[4820]: I0201 15:25:46.288848 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-54ttw_39a1a56a-4224-411d-95d1-49cf679d773e/manager/0.log" Feb 01 15:25:46 crc kubenswrapper[4820]: I0201 15:25:46.453731 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-vn95k_a56bcd98-4612-4d2a-b172-78d775c10b6a/manager/0.log" Feb 01 15:25:46 crc kubenswrapper[4820]: I0201 15:25:46.477420 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-6lqdn_7cc4d222-cdf7-4f68-b8b0-eb9a01b0cc12/manager/0.log" Feb 01 15:25:46 crc kubenswrapper[4820]: I0201 15:25:46.603223 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dqbjfh_597a3429-85c5-4e47-983e-c77f2ccc22d3/manager/0.log" Feb 01 15:25:46 crc kubenswrapper[4820]: I0201 15:25:46.761608 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-57d5594849-c9q67_cd9d8871-b20a-4c21-b59e-3a7610021960/operator/0.log" Feb 01 15:25:46 crc kubenswrapper[4820]: I0201 15:25:46.930277 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-kd6lp_500ca2ff-1d42-4876-9838-3e11b923e72e/registry-server/0.log" Feb 01 15:25:47 crc kubenswrapper[4820]: I0201 15:25:47.118997 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-mljht_07b45e70-e382-4778-8d7e-f360ab63dcf9/manager/0.log" Feb 01 15:25:47 crc kubenswrapper[4820]: I0201 15:25:47.294079 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-g7844_d4bd9854-7d67-4698-a00a-61ea442cc25b/manager/0.log" Feb 01 15:25:47 crc kubenswrapper[4820]: I0201 15:25:47.433853 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-qwxns_c560d3b1-caa9-4125-bc88-cdcdfb7ed651/operator/0.log" Feb 01 15:25:47 crc kubenswrapper[4820]: I0201 15:25:47.551825 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-nqvp8_3f3881e5-dfdc-4d18-9d61-78439a69d0cb/manager/0.log" Feb 01 15:25:47 crc kubenswrapper[4820]: I0201 15:25:47.873468 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-c65qc_2036e257-d3cc-48a2-bfc5-d3a262090c1e/manager/0.log" Feb 01 15:25:47 crc kubenswrapper[4820]: I0201 15:25:47.950289 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-5dxhd_9a9e9d77-ca3a-4cd4-b39a-75f8730f12f6/manager/0.log" Feb 01 15:25:48 crc kubenswrapper[4820]: I0201 15:25:48.012682 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-86d788bc79-jdccp_b1b641e7-c3a8-4f7e-89c7-e362a3080f70/manager/0.log" Feb 01 15:25:48 crc kubenswrapper[4820]: I0201 15:25:48.105750 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-4mbst_247ddb96-9b27-41f5-8890-8bf2dd175c36/manager/0.log" Feb 01 15:25:52 crc kubenswrapper[4820]: I0201 15:25:52.199708 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 01 15:25:56 crc kubenswrapper[4820]: I0201 15:25:56.198674 4820 scope.go:117] "RemoveContainer" containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91" Feb 01 15:25:56 crc kubenswrapper[4820]: E0201 15:25:56.199486 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527"
Feb 01 15:26:08 crc kubenswrapper[4820]: I0201 15:26:08.198830 4820 scope.go:117] "RemoveContainer" containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91"
Feb 01 15:26:08 crc kubenswrapper[4820]: E0201 15:26:08.199761 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527"
Feb 01 15:26:09 crc kubenswrapper[4820]: I0201 15:26:09.257418 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-mnjbh_5e9a3678-76c3-40e5-861d-3e8eb68cd783/control-plane-machine-set-operator/0.log"
Feb 01 15:26:09 crc kubenswrapper[4820]: I0201 15:26:09.441851 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kh9nd_3daddc7f-d4d1-4682-97c2-b10266a3ab44/kube-rbac-proxy/0.log"
Feb 01 15:26:09 crc kubenswrapper[4820]: I0201 15:26:09.442553 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kh9nd_3daddc7f-d4d1-4682-97c2-b10266a3ab44/machine-api-operator/0.log"
Feb 01 15:26:22 crc kubenswrapper[4820]: I0201 15:26:22.198758 4820 scope.go:117] "RemoveContainer" containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91"
Feb 01 15:26:23 crc kubenswrapper[4820]: I0201 15:26:23.084191 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerStarted","Data":"3cc1d690e95fc450ab3fa44774018efb8f1b2cbe45f56a3847bca6c5ddad7518"}
Feb 01 15:26:23 crc kubenswrapper[4820]: I0201 15:26:23.257703 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-s2v7v_abf63078-c1c6-489d-a164-ed6693f7cc18/cert-manager-controller/0.log"
Feb 01 15:26:23 crc kubenswrapper[4820]: I0201 15:26:23.442184 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-l9lrd_0583e704-ae2a-4945-bb44-316d410c6600/cert-manager-cainjector/0.log"
Feb 01 15:26:23 crc kubenswrapper[4820]: I0201 15:26:23.481690 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-77x8h_d6214179-43bd-4284-83fa-62d0799035c3/cert-manager-webhook/0.log"
Feb 01 15:26:37 crc kubenswrapper[4820]: I0201 15:26:37.760547 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-s725j_43cf9246-6240-46c1-abba-9e2fcfaa8d15/nmstate-console-plugin/0.log"
Feb 01 15:26:37 crc kubenswrapper[4820]: I0201 15:26:37.913338 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-xnhkw_f7ada872-1aff-4471-bd66-ec3c2ad0e069/nmstate-handler/0.log"
Feb 01 15:26:37 crc kubenswrapper[4820]: I0201 15:26:37.919436 4820 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-bcw4x_2c09606f-2d9d-471c-a638-d7d9aef056eb/kube-rbac-proxy/0.log" Feb 01 15:26:37 crc kubenswrapper[4820]: I0201 15:26:37.935379 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-bcw4x_2c09606f-2d9d-471c-a638-d7d9aef056eb/nmstate-metrics/0.log" Feb 01 15:26:38 crc kubenswrapper[4820]: I0201 15:26:38.137822 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-tlrts_33e9fac7-99b7-41d3-beb0-948c190885c3/nmstate-operator/0.log" Feb 01 15:26:38 crc kubenswrapper[4820]: I0201 15:26:38.164259 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-szpq9_ba7e9c14-2225-4cff-93c0-dc5988f425f0/nmstate-webhook/0.log" Feb 01 15:27:06 crc kubenswrapper[4820]: I0201 15:27:06.354800 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-dg6dc_b8c2d8f0-d9c2-4b9e-802f-ec6a34533357/kube-rbac-proxy/0.log" Feb 01 15:27:06 crc kubenswrapper[4820]: I0201 15:27:06.434217 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-dg6dc_b8c2d8f0-d9c2-4b9e-802f-ec6a34533357/controller/0.log" Feb 01 15:27:06 crc kubenswrapper[4820]: I0201 15:27:06.599010 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9l7gp_5bb6335f-34a1-4d93-b71c-74b5f62ea699/cp-frr-files/0.log" Feb 01 15:27:06 crc kubenswrapper[4820]: I0201 15:27:06.732504 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9l7gp_5bb6335f-34a1-4d93-b71c-74b5f62ea699/cp-reloader/0.log" Feb 01 15:27:06 crc kubenswrapper[4820]: I0201 15:27:06.763354 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9l7gp_5bb6335f-34a1-4d93-b71c-74b5f62ea699/cp-frr-files/0.log" Feb 01 15:27:06 crc kubenswrapper[4820]: I0201 15:27:06.782220 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9l7gp_5bb6335f-34a1-4d93-b71c-74b5f62ea699/cp-reloader/0.log" Feb 01 15:27:06 crc kubenswrapper[4820]: I0201 15:27:06.797401 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9l7gp_5bb6335f-34a1-4d93-b71c-74b5f62ea699/cp-metrics/0.log" Feb 01 15:27:06 crc kubenswrapper[4820]: I0201 15:27:06.943086 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9l7gp_5bb6335f-34a1-4d93-b71c-74b5f62ea699/cp-reloader/0.log" Feb 01 15:27:06 crc kubenswrapper[4820]: I0201 15:27:06.955743 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9l7gp_5bb6335f-34a1-4d93-b71c-74b5f62ea699/cp-metrics/0.log" Feb 01 15:27:06 crc kubenswrapper[4820]: I0201 15:27:06.979019 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9l7gp_5bb6335f-34a1-4d93-b71c-74b5f62ea699/cp-metrics/0.log" Feb 01 15:27:06 crc kubenswrapper[4820]: I0201 15:27:06.993193 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9l7gp_5bb6335f-34a1-4d93-b71c-74b5f62ea699/cp-frr-files/0.log" Feb 01 15:27:07 crc kubenswrapper[4820]: I0201 15:27:07.130323 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9l7gp_5bb6335f-34a1-4d93-b71c-74b5f62ea699/cp-reloader/0.log" Feb 01 15:27:07 crc kubenswrapper[4820]: I0201 15:27:07.132855 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-9l7gp_5bb6335f-34a1-4d93-b71c-74b5f62ea699/cp-metrics/0.log" Feb 01 15:27:07 crc kubenswrapper[4820]: I0201 15:27:07.134714 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9l7gp_5bb6335f-34a1-4d93-b71c-74b5f62ea699/cp-frr-files/0.log" Feb 01 15:27:07 crc kubenswrapper[4820]: I0201 15:27:07.170153 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9l7gp_5bb6335f-34a1-4d93-b71c-74b5f62ea699/controller/0.log" Feb 01 15:27:07 crc kubenswrapper[4820]: I0201 15:27:07.325216 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9l7gp_5bb6335f-34a1-4d93-b71c-74b5f62ea699/frr-metrics/0.log" Feb 01 15:27:07 crc kubenswrapper[4820]: I0201 15:27:07.332999 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9l7gp_5bb6335f-34a1-4d93-b71c-74b5f62ea699/kube-rbac-proxy/0.log" Feb 01 15:27:07 crc kubenswrapper[4820]: I0201 15:27:07.392092 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9l7gp_5bb6335f-34a1-4d93-b71c-74b5f62ea699/kube-rbac-proxy-frr/0.log" Feb 01 15:27:07 crc kubenswrapper[4820]: I0201 15:27:07.539746 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9l7gp_5bb6335f-34a1-4d93-b71c-74b5f62ea699/reloader/0.log" Feb 01 15:27:07 crc kubenswrapper[4820]: I0201 15:27:07.613839 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-k2kpp_e8f0fec3-947b-4599-8fc7-e588a982471e/frr-k8s-webhook-server/0.log" Feb 01 15:27:07 crc kubenswrapper[4820]: I0201 15:27:07.759539 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7bcdb74864-zp85l_daa33ef9-cdc3-4aa9-8861-48c4175b8c99/manager/0.log" Feb 01 15:27:07 crc kubenswrapper[4820]: I0201 15:27:07.958431 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-56944747c7-brf8w_a374cc63-6735-4835-b5af-b1367ff52823/webhook-server/0.log" Feb 01 15:27:08 crc kubenswrapper[4820]: I0201 15:27:08.044817 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gtf4x_b1fcd8e0-6081-4cd1-9998-82a695f11d62/kube-rbac-proxy/0.log" Feb 01 15:27:08 crc kubenswrapper[4820]: I0201 15:27:08.653166 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gtf4x_b1fcd8e0-6081-4cd1-9998-82a695f11d62/speaker/0.log" Feb 01 15:27:08 crc kubenswrapper[4820]: I0201 15:27:08.745954 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9l7gp_5bb6335f-34a1-4d93-b71c-74b5f62ea699/frr/0.log" Feb 01 15:27:22 crc kubenswrapper[4820]: I0201 15:27:22.465777 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff_fc1ce558-b23f-496a-946a-d25c4fb31282/util/0.log" Feb 01 15:27:22 crc kubenswrapper[4820]: I0201 15:27:22.657908 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff_fc1ce558-b23f-496a-946a-d25c4fb31282/pull/0.log" Feb 01 15:27:22 crc kubenswrapper[4820]: I0201 15:27:22.683116 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff_fc1ce558-b23f-496a-946a-d25c4fb31282/util/0.log" Feb 01 
15:27:22 crc kubenswrapper[4820]: I0201 15:27:22.683753 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff_fc1ce558-b23f-496a-946a-d25c4fb31282/pull/0.log" Feb 01 15:27:22 crc kubenswrapper[4820]: I0201 15:27:22.827687 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff_fc1ce558-b23f-496a-946a-d25c4fb31282/util/0.log" Feb 01 15:27:22 crc kubenswrapper[4820]: I0201 15:27:22.861241 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff_fc1ce558-b23f-496a-946a-d25c4fb31282/extract/0.log" Feb 01 15:27:22 crc kubenswrapper[4820]: I0201 15:27:22.864700 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxgzff_fc1ce558-b23f-496a-946a-d25c4fb31282/pull/0.log" Feb 01 15:27:22 crc kubenswrapper[4820]: I0201 15:27:22.984538 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p_11ee074d-38c2-43a0-8d78-f1860a212744/util/0.log" Feb 01 15:27:23 crc kubenswrapper[4820]: I0201 15:27:23.182720 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p_11ee074d-38c2-43a0-8d78-f1860a212744/util/0.log" Feb 01 15:27:23 crc kubenswrapper[4820]: I0201 15:27:23.205464 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p_11ee074d-38c2-43a0-8d78-f1860a212744/pull/0.log" Feb 01 15:27:23 crc kubenswrapper[4820]: I0201 15:27:23.213201 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p_11ee074d-38c2-43a0-8d78-f1860a212744/pull/0.log" Feb 01 15:27:23 crc kubenswrapper[4820]: I0201 15:27:23.378123 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p_11ee074d-38c2-43a0-8d78-f1860a212744/extract/0.log" Feb 01 15:27:23 crc kubenswrapper[4820]: I0201 15:27:23.383640 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p_11ee074d-38c2-43a0-8d78-f1860a212744/util/0.log" Feb 01 15:27:23 crc kubenswrapper[4820]: I0201 15:27:23.384565 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gbn9p_11ee074d-38c2-43a0-8d78-f1860a212744/pull/0.log" Feb 01 15:27:23 crc kubenswrapper[4820]: I0201 15:27:23.544821 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xpw4d_c6606962-84cd-47fe-8555-dd37b9f4f5ed/extract-utilities/0.log" Feb 01 15:27:23 crc kubenswrapper[4820]: I0201 15:27:23.693559 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xpw4d_c6606962-84cd-47fe-8555-dd37b9f4f5ed/extract-utilities/0.log" Feb 01 15:27:23 crc kubenswrapper[4820]: I0201 15:27:23.703185 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-xpw4d_c6606962-84cd-47fe-8555-dd37b9f4f5ed/extract-content/0.log" Feb 01 15:27:23 crc kubenswrapper[4820]: I0201 15:27:23.716680 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xpw4d_c6606962-84cd-47fe-8555-dd37b9f4f5ed/extract-content/0.log" Feb 01 15:27:23 crc kubenswrapper[4820]: I0201 15:27:23.884656 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xpw4d_c6606962-84cd-47fe-8555-dd37b9f4f5ed/extract-content/0.log" Feb 01 15:27:23 crc kubenswrapper[4820]: I0201 15:27:23.896989 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xpw4d_c6606962-84cd-47fe-8555-dd37b9f4f5ed/extract-utilities/0.log" Feb 01 15:27:24 crc kubenswrapper[4820]: I0201 15:27:24.135577 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xpw4d_c6606962-84cd-47fe-8555-dd37b9f4f5ed/registry-server/0.log" Feb 01 15:27:24 crc kubenswrapper[4820]: I0201 15:27:24.166206 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z5b9w_fdd93d0a-d294-437c-9d6d-b840be862df0/extract-utilities/0.log" Feb 01 15:27:24 crc kubenswrapper[4820]: I0201 15:27:24.345039 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z5b9w_fdd93d0a-d294-437c-9d6d-b840be862df0/extract-content/0.log" Feb 01 15:27:24 crc kubenswrapper[4820]: I0201 15:27:24.377894 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z5b9w_fdd93d0a-d294-437c-9d6d-b840be862df0/extract-utilities/0.log" Feb 01 15:27:24 crc kubenswrapper[4820]: I0201 15:27:24.424674 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z5b9w_fdd93d0a-d294-437c-9d6d-b840be862df0/extract-content/0.log" Feb 01 15:27:24 crc kubenswrapper[4820]: I0201 15:27:24.547406 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z5b9w_fdd93d0a-d294-437c-9d6d-b840be862df0/extract-utilities/0.log" Feb 01 15:27:24 crc kubenswrapper[4820]: I0201 15:27:24.559234 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z5b9w_fdd93d0a-d294-437c-9d6d-b840be862df0/extract-content/0.log" Feb 01 15:27:24 crc kubenswrapper[4820]: I0201 15:27:24.719825 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-jn7nh_0d0c59a0-904e-4386-9a3b-7980f0b1e697/marketplace-operator/0.log" Feb 01 15:27:24 crc kubenswrapper[4820]: I0201 15:27:24.939255 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4rv9w_116e78e9-fd64-4d4a-8f86-a7a555f1e36e/extract-utilities/0.log" Feb 01 15:27:25 crc kubenswrapper[4820]: I0201 15:27:25.135033 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4rv9w_116e78e9-fd64-4d4a-8f86-a7a555f1e36e/extract-utilities/0.log" Feb 01 15:27:25 crc kubenswrapper[4820]: I0201 15:27:25.141547 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4rv9w_116e78e9-fd64-4d4a-8f86-a7a555f1e36e/extract-content/0.log" Feb 01 15:27:25 crc kubenswrapper[4820]: I0201 15:27:25.205480 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-z5b9w_fdd93d0a-d294-437c-9d6d-b840be862df0/registry-server/0.log" Feb 01 15:27:25 crc kubenswrapper[4820]: I0201 15:27:25.243389 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4rv9w_116e78e9-fd64-4d4a-8f86-a7a555f1e36e/extract-content/0.log" Feb 01 15:27:25 crc kubenswrapper[4820]: I0201 15:27:25.360280 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4rv9w_116e78e9-fd64-4d4a-8f86-a7a555f1e36e/extract-utilities/0.log" Feb 01 15:27:25 crc kubenswrapper[4820]: I0201 15:27:25.392042 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4rv9w_116e78e9-fd64-4d4a-8f86-a7a555f1e36e/extract-content/0.log" Feb 01 15:27:25 crc kubenswrapper[4820]: I0201 15:27:25.506152 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4rv9w_116e78e9-fd64-4d4a-8f86-a7a555f1e36e/registry-server/0.log" Feb 01 15:27:25 crc kubenswrapper[4820]: I0201 15:27:25.558408 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j9hj6_0fc03fc8-6e28-4787-8916-0d53e1b11ae8/extract-utilities/0.log" Feb 01 15:27:25 crc kubenswrapper[4820]: I0201 15:27:25.704177 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j9hj6_0fc03fc8-6e28-4787-8916-0d53e1b11ae8/extract-utilities/0.log" Feb 01 15:27:25 crc kubenswrapper[4820]: I0201 15:27:25.737119 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j9hj6_0fc03fc8-6e28-4787-8916-0d53e1b11ae8/extract-content/0.log" Feb 01 15:27:25 crc kubenswrapper[4820]: I0201 15:27:25.742271 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j9hj6_0fc03fc8-6e28-4787-8916-0d53e1b11ae8/extract-content/0.log" Feb 01 15:27:25 crc kubenswrapper[4820]: I0201 15:27:25.915318 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j9hj6_0fc03fc8-6e28-4787-8916-0d53e1b11ae8/extract-utilities/0.log" Feb 01 15:27:25 crc kubenswrapper[4820]: I0201 15:27:25.938922 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j9hj6_0fc03fc8-6e28-4787-8916-0d53e1b11ae8/extract-content/0.log" Feb 01 15:27:26 crc kubenswrapper[4820]: I0201 15:27:26.410432 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j9hj6_0fc03fc8-6e28-4787-8916-0d53e1b11ae8/registry-server/0.log" Feb 01 15:27:40 crc kubenswrapper[4820]: I0201 15:27:40.043534 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mntjt"] Feb 01 15:27:40 crc kubenswrapper[4820]: E0201 15:27:40.044427 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c" containerName="container-00" Feb 01 15:27:40 crc kubenswrapper[4820]: I0201 15:27:40.044502 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c" containerName="container-00" Feb 01 15:27:40 crc kubenswrapper[4820]: I0201 15:27:40.044706 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2eb97cf-bdf9-40ee-a46f-3cbb9d749c4c" containerName="container-00" Feb 01 15:27:40 crc kubenswrapper[4820]: I0201 15:27:40.045916 4820 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mntjt" Feb 01 15:27:40 crc kubenswrapper[4820]: I0201 15:27:40.062992 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mntjt"] Feb 01 15:27:40 crc kubenswrapper[4820]: I0201 15:27:40.198613 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cspsm\" (UniqueName: \"kubernetes.io/projected/64b4ece8-9445-4399-ad3b-fe17f985fe9d-kube-api-access-cspsm\") pod \"redhat-operators-mntjt\" (UID: \"64b4ece8-9445-4399-ad3b-fe17f985fe9d\") " pod="openshift-marketplace/redhat-operators-mntjt" Feb 01 15:27:40 crc kubenswrapper[4820]: I0201 15:27:40.198848 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64b4ece8-9445-4399-ad3b-fe17f985fe9d-utilities\") pod \"redhat-operators-mntjt\" (UID: \"64b4ece8-9445-4399-ad3b-fe17f985fe9d\") " pod="openshift-marketplace/redhat-operators-mntjt" Feb 01 15:27:40 crc kubenswrapper[4820]: I0201 15:27:40.199146 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64b4ece8-9445-4399-ad3b-fe17f985fe9d-catalog-content\") pod \"redhat-operators-mntjt\" (UID: \"64b4ece8-9445-4399-ad3b-fe17f985fe9d\") " pod="openshift-marketplace/redhat-operators-mntjt" Feb 01 15:27:40 crc kubenswrapper[4820]: I0201 15:27:40.301449 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64b4ece8-9445-4399-ad3b-fe17f985fe9d-utilities\") pod \"redhat-operators-mntjt\" (UID: \"64b4ece8-9445-4399-ad3b-fe17f985fe9d\") " pod="openshift-marketplace/redhat-operators-mntjt" Feb 01 15:27:40 crc kubenswrapper[4820]: I0201 15:27:40.301670 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64b4ece8-9445-4399-ad3b-fe17f985fe9d-catalog-content\") pod \"redhat-operators-mntjt\" (UID: \"64b4ece8-9445-4399-ad3b-fe17f985fe9d\") " pod="openshift-marketplace/redhat-operators-mntjt" Feb 01 15:27:40 crc kubenswrapper[4820]: I0201 15:27:40.301805 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cspsm\" (UniqueName: \"kubernetes.io/projected/64b4ece8-9445-4399-ad3b-fe17f985fe9d-kube-api-access-cspsm\") pod \"redhat-operators-mntjt\" (UID: \"64b4ece8-9445-4399-ad3b-fe17f985fe9d\") " pod="openshift-marketplace/redhat-operators-mntjt" Feb 01 15:27:40 crc kubenswrapper[4820]: I0201 15:27:40.302130 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64b4ece8-9445-4399-ad3b-fe17f985fe9d-utilities\") pod \"redhat-operators-mntjt\" (UID: \"64b4ece8-9445-4399-ad3b-fe17f985fe9d\") " pod="openshift-marketplace/redhat-operators-mntjt" Feb 01 15:27:40 crc kubenswrapper[4820]: I0201 15:27:40.302209 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64b4ece8-9445-4399-ad3b-fe17f985fe9d-catalog-content\") pod \"redhat-operators-mntjt\" (UID: \"64b4ece8-9445-4399-ad3b-fe17f985fe9d\") " pod="openshift-marketplace/redhat-operators-mntjt" Feb 01 15:27:40 crc kubenswrapper[4820]: I0201 15:27:40.325823 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cspsm\" (UniqueName: \"kubernetes.io/projected/64b4ece8-9445-4399-ad3b-fe17f985fe9d-kube-api-access-cspsm\") pod \"redhat-operators-mntjt\" (UID: \"64b4ece8-9445-4399-ad3b-fe17f985fe9d\") " pod="openshift-marketplace/redhat-operators-mntjt" Feb 01 15:27:40 crc kubenswrapper[4820]: I0201 15:27:40.363481 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mntjt" Feb 01 15:27:40 crc kubenswrapper[4820]: I0201 15:27:40.846103 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mntjt"] Feb 01 15:27:41 crc kubenswrapper[4820]: I0201 15:27:41.831383 4820 generic.go:334] "Generic (PLEG): container finished" podID="64b4ece8-9445-4399-ad3b-fe17f985fe9d" containerID="beeecedac6e22cba279989ebcfb0c3c1e2df455fe5fe5a8613a8b5272282c9eb" exitCode=0 Feb 01 15:27:41 crc kubenswrapper[4820]: I0201 15:27:41.831429 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mntjt" event={"ID":"64b4ece8-9445-4399-ad3b-fe17f985fe9d","Type":"ContainerDied","Data":"beeecedac6e22cba279989ebcfb0c3c1e2df455fe5fe5a8613a8b5272282c9eb"} Feb 01 15:27:41 crc kubenswrapper[4820]: I0201 15:27:41.831633 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mntjt" event={"ID":"64b4ece8-9445-4399-ad3b-fe17f985fe9d","Type":"ContainerStarted","Data":"17b7714f5851f1ce8b8be72ae98cb2e0e89d68d9e0ef01f75c1fa6df6e47f10e"} Feb 01 15:27:41 crc kubenswrapper[4820]: I0201 15:27:41.834554 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 01 15:27:43 crc kubenswrapper[4820]: I0201 15:27:43.850476 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mntjt" event={"ID":"64b4ece8-9445-4399-ad3b-fe17f985fe9d","Type":"ContainerStarted","Data":"7cb92305cbf65366a34bfb6d443d16dad53392c2d1d21d5a817ae5fd8e02ec82"} Feb 01 15:27:45 crc kubenswrapper[4820]: I0201 15:27:45.867796 4820 generic.go:334] "Generic (PLEG): container finished" podID="64b4ece8-9445-4399-ad3b-fe17f985fe9d" containerID="7cb92305cbf65366a34bfb6d443d16dad53392c2d1d21d5a817ae5fd8e02ec82" exitCode=0 Feb 01 15:27:45 crc kubenswrapper[4820]: I0201 15:27:45.867871 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mntjt" event={"ID":"64b4ece8-9445-4399-ad3b-fe17f985fe9d","Type":"ContainerDied","Data":"7cb92305cbf65366a34bfb6d443d16dad53392c2d1d21d5a817ae5fd8e02ec82"} Feb 01 15:27:46 crc kubenswrapper[4820]: I0201 15:27:46.877369 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mntjt" event={"ID":"64b4ece8-9445-4399-ad3b-fe17f985fe9d","Type":"ContainerStarted","Data":"e2b68f3b239d70dcac609504c0951ac55dda4044ee7d41f24c93d0f8edc10593"} Feb 01 15:27:46 crc kubenswrapper[4820]: I0201 15:27:46.906213 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mntjt" podStartSLOduration=2.398706636 podStartE2EDuration="6.906193004s" podCreationTimestamp="2026-02-01 15:27:40 +0000 UTC" firstStartedPulling="2026-02-01 15:27:41.83433326 +0000 UTC m=+4003.354699544" lastFinishedPulling="2026-02-01 15:27:46.341819628 +0000 UTC m=+4007.862185912" observedRunningTime="2026-02-01 15:27:46.903152292 +0000 UTC m=+4008.423518576" watchObservedRunningTime="2026-02-01 15:27:46.906193004 +0000 UTC m=+4008.426559288" Feb 01 15:27:50 crc 
kubenswrapper[4820]: I0201 15:27:50.364224 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mntjt" Feb 01 15:27:50 crc kubenswrapper[4820]: I0201 15:27:50.364605 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mntjt" Feb 01 15:27:51 crc kubenswrapper[4820]: I0201 15:27:51.683182 4820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mntjt" podUID="64b4ece8-9445-4399-ad3b-fe17f985fe9d" containerName="registry-server" probeResult="failure" output=< Feb 01 15:27:51 crc kubenswrapper[4820]: timeout: failed to connect service ":50051" within 1s Feb 01 15:27:51 crc kubenswrapper[4820]: > Feb 01 15:28:00 crc kubenswrapper[4820]: I0201 15:28:00.415377 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mntjt" Feb 01 15:28:00 crc kubenswrapper[4820]: I0201 15:28:00.461342 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mntjt" Feb 01 15:28:00 crc kubenswrapper[4820]: I0201 15:28:00.654075 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mntjt"] Feb 01 15:28:01 crc kubenswrapper[4820]: E0201 15:28:01.495335 4820 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.73:35094->38.102.83.73:46051: write tcp 38.102.83.73:35094->38.102.83.73:46051: write: broken pipe Feb 01 15:28:02 crc kubenswrapper[4820]: I0201 15:28:02.001198 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mntjt" podUID="64b4ece8-9445-4399-ad3b-fe17f985fe9d" containerName="registry-server" containerID="cri-o://e2b68f3b239d70dcac609504c0951ac55dda4044ee7d41f24c93d0f8edc10593" gracePeriod=2 Feb 01 15:28:02 crc kubenswrapper[4820]: I0201 15:28:02.549315 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mntjt" Feb 01 15:28:02 crc kubenswrapper[4820]: I0201 15:28:02.645055 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64b4ece8-9445-4399-ad3b-fe17f985fe9d-utilities\") pod \"64b4ece8-9445-4399-ad3b-fe17f985fe9d\" (UID: \"64b4ece8-9445-4399-ad3b-fe17f985fe9d\") " Feb 01 15:28:02 crc kubenswrapper[4820]: I0201 15:28:02.645357 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64b4ece8-9445-4399-ad3b-fe17f985fe9d-catalog-content\") pod \"64b4ece8-9445-4399-ad3b-fe17f985fe9d\" (UID: \"64b4ece8-9445-4399-ad3b-fe17f985fe9d\") " Feb 01 15:28:02 crc kubenswrapper[4820]: I0201 15:28:02.645462 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cspsm\" (UniqueName: \"kubernetes.io/projected/64b4ece8-9445-4399-ad3b-fe17f985fe9d-kube-api-access-cspsm\") pod \"64b4ece8-9445-4399-ad3b-fe17f985fe9d\" (UID: \"64b4ece8-9445-4399-ad3b-fe17f985fe9d\") " Feb 01 15:28:02 crc kubenswrapper[4820]: I0201 15:28:02.647382 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64b4ece8-9445-4399-ad3b-fe17f985fe9d-utilities" (OuterVolumeSpecName: "utilities") pod "64b4ece8-9445-4399-ad3b-fe17f985fe9d" (UID: "64b4ece8-9445-4399-ad3b-fe17f985fe9d"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:28:02 crc kubenswrapper[4820]: I0201 15:28:02.654913 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64b4ece8-9445-4399-ad3b-fe17f985fe9d-kube-api-access-cspsm" (OuterVolumeSpecName: "kube-api-access-cspsm") pod "64b4ece8-9445-4399-ad3b-fe17f985fe9d" (UID: "64b4ece8-9445-4399-ad3b-fe17f985fe9d"). InnerVolumeSpecName "kube-api-access-cspsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:28:02 crc kubenswrapper[4820]: I0201 15:28:02.748164 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64b4ece8-9445-4399-ad3b-fe17f985fe9d-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 15:28:02 crc kubenswrapper[4820]: I0201 15:28:02.748188 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cspsm\" (UniqueName: \"kubernetes.io/projected/64b4ece8-9445-4399-ad3b-fe17f985fe9d-kube-api-access-cspsm\") on node \"crc\" DevicePath \"\"" Feb 01 15:28:02 crc kubenswrapper[4820]: I0201 15:28:02.768020 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64b4ece8-9445-4399-ad3b-fe17f985fe9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64b4ece8-9445-4399-ad3b-fe17f985fe9d" (UID: "64b4ece8-9445-4399-ad3b-fe17f985fe9d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:28:02 crc kubenswrapper[4820]: I0201 15:28:02.850326 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64b4ece8-9445-4399-ad3b-fe17f985fe9d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 15:28:03 crc kubenswrapper[4820]: I0201 15:28:03.013473 4820 generic.go:334] "Generic (PLEG): container finished" podID="64b4ece8-9445-4399-ad3b-fe17f985fe9d" containerID="e2b68f3b239d70dcac609504c0951ac55dda4044ee7d41f24c93d0f8edc10593" exitCode=0 Feb 01 15:28:03 crc kubenswrapper[4820]: I0201 15:28:03.013523 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mntjt" event={"ID":"64b4ece8-9445-4399-ad3b-fe17f985fe9d","Type":"ContainerDied","Data":"e2b68f3b239d70dcac609504c0951ac55dda4044ee7d41f24c93d0f8edc10593"} Feb 01 15:28:03 crc kubenswrapper[4820]: I0201 15:28:03.013554 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mntjt" event={"ID":"64b4ece8-9445-4399-ad3b-fe17f985fe9d","Type":"ContainerDied","Data":"17b7714f5851f1ce8b8be72ae98cb2e0e89d68d9e0ef01f75c1fa6df6e47f10e"} Feb 01 15:28:03 crc kubenswrapper[4820]: I0201 15:28:03.013577 4820 scope.go:117] "RemoveContainer" containerID="e2b68f3b239d70dcac609504c0951ac55dda4044ee7d41f24c93d0f8edc10593" Feb 01 15:28:03 crc kubenswrapper[4820]: I0201 15:28:03.013592 4820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mntjt" Feb 01 15:28:03 crc kubenswrapper[4820]: I0201 15:28:03.035991 4820 scope.go:117] "RemoveContainer" containerID="7cb92305cbf65366a34bfb6d443d16dad53392c2d1d21d5a817ae5fd8e02ec82" Feb 01 15:28:03 crc kubenswrapper[4820]: I0201 15:28:03.049643 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mntjt"] Feb 01 15:28:03 crc kubenswrapper[4820]: I0201 15:28:03.059544 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mntjt"] Feb 01 15:28:03 crc kubenswrapper[4820]: I0201 15:28:03.075850 4820 scope.go:117] "RemoveContainer" containerID="beeecedac6e22cba279989ebcfb0c3c1e2df455fe5fe5a8613a8b5272282c9eb" Feb 01 15:28:03 crc kubenswrapper[4820]: I0201 15:28:03.123343 4820 scope.go:117] "RemoveContainer" containerID="e2b68f3b239d70dcac609504c0951ac55dda4044ee7d41f24c93d0f8edc10593" Feb 01 15:28:03 crc kubenswrapper[4820]: E0201 15:28:03.123993 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2b68f3b239d70dcac609504c0951ac55dda4044ee7d41f24c93d0f8edc10593\": container with ID starting with e2b68f3b239d70dcac609504c0951ac55dda4044ee7d41f24c93d0f8edc10593 not found: ID does not exist" containerID="e2b68f3b239d70dcac609504c0951ac55dda4044ee7d41f24c93d0f8edc10593" Feb 01 15:28:03 crc kubenswrapper[4820]: I0201 15:28:03.124071 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2b68f3b239d70dcac609504c0951ac55dda4044ee7d41f24c93d0f8edc10593"} err="failed to get container status \"e2b68f3b239d70dcac609504c0951ac55dda4044ee7d41f24c93d0f8edc10593\": rpc error: code = NotFound desc = could not find container \"e2b68f3b239d70dcac609504c0951ac55dda4044ee7d41f24c93d0f8edc10593\": container with ID starting with e2b68f3b239d70dcac609504c0951ac55dda4044ee7d41f24c93d0f8edc10593 not found: ID does not exist" Feb 01 15:28:03 crc kubenswrapper[4820]: I0201 15:28:03.124111 4820 scope.go:117] "RemoveContainer" containerID="7cb92305cbf65366a34bfb6d443d16dad53392c2d1d21d5a817ae5fd8e02ec82" Feb 01 15:28:03 crc kubenswrapper[4820]: E0201 15:28:03.124609 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cb92305cbf65366a34bfb6d443d16dad53392c2d1d21d5a817ae5fd8e02ec82\": container with ID starting with 7cb92305cbf65366a34bfb6d443d16dad53392c2d1d21d5a817ae5fd8e02ec82 not found: ID does not exist" containerID="7cb92305cbf65366a34bfb6d443d16dad53392c2d1d21d5a817ae5fd8e02ec82" Feb 01 15:28:03 crc kubenswrapper[4820]: I0201 15:28:03.124665 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cb92305cbf65366a34bfb6d443d16dad53392c2d1d21d5a817ae5fd8e02ec82"} err="failed to get container status \"7cb92305cbf65366a34bfb6d443d16dad53392c2d1d21d5a817ae5fd8e02ec82\": rpc error: code = NotFound desc = could not find container \"7cb92305cbf65366a34bfb6d443d16dad53392c2d1d21d5a817ae5fd8e02ec82\": container with ID starting with 7cb92305cbf65366a34bfb6d443d16dad53392c2d1d21d5a817ae5fd8e02ec82 not found: ID does not exist" Feb 01 15:28:03 crc kubenswrapper[4820]: I0201 15:28:03.124684 4820 scope.go:117] "RemoveContainer" containerID="beeecedac6e22cba279989ebcfb0c3c1e2df455fe5fe5a8613a8b5272282c9eb" Feb 01 15:28:03 crc kubenswrapper[4820]: E0201 15:28:03.125246 4820 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"beeecedac6e22cba279989ebcfb0c3c1e2df455fe5fe5a8613a8b5272282c9eb\": container with ID starting with beeecedac6e22cba279989ebcfb0c3c1e2df455fe5fe5a8613a8b5272282c9eb not found: ID does not exist" containerID="beeecedac6e22cba279989ebcfb0c3c1e2df455fe5fe5a8613a8b5272282c9eb" Feb 01 15:28:03 crc kubenswrapper[4820]: I0201 15:28:03.125270 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"beeecedac6e22cba279989ebcfb0c3c1e2df455fe5fe5a8613a8b5272282c9eb"} err="failed to get container status \"beeecedac6e22cba279989ebcfb0c3c1e2df455fe5fe5a8613a8b5272282c9eb\": rpc error: code = NotFound desc = could not find container \"beeecedac6e22cba279989ebcfb0c3c1e2df455fe5fe5a8613a8b5272282c9eb\": container with ID starting with beeecedac6e22cba279989ebcfb0c3c1e2df455fe5fe5a8613a8b5272282c9eb not found: ID does not exist" Feb 01 15:28:03 crc kubenswrapper[4820]: I0201 15:28:03.210919 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64b4ece8-9445-4399-ad3b-fe17f985fe9d" path="/var/lib/kubelet/pods/64b4ece8-9445-4399-ad3b-fe17f985fe9d/volumes" Feb 01 15:28:49 crc kubenswrapper[4820]: I0201 15:28:49.243396 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 15:28:49 crc kubenswrapper[4820]: I0201 15:28:49.244702 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 15:29:19 crc kubenswrapper[4820]: I0201 15:29:19.242152 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 15:29:19 crc kubenswrapper[4820]: I0201 15:29:19.242733 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 15:29:20 crc kubenswrapper[4820]: I0201 15:29:20.906519 4820 generic.go:334] "Generic (PLEG): container finished" podID="e5777edd-acdc-4e8c-952c-fca23e3a1311" containerID="1fe266e3e3eccdeacc76bb4b2478eddd4e69a5b9986e361958afed0eba17e795" exitCode=0 Feb 01 15:29:20 crc kubenswrapper[4820]: I0201 15:29:20.906662 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ng5b5/must-gather-hlvcm" event={"ID":"e5777edd-acdc-4e8c-952c-fca23e3a1311","Type":"ContainerDied","Data":"1fe266e3e3eccdeacc76bb4b2478eddd4e69a5b9986e361958afed0eba17e795"} Feb 01 15:29:20 crc kubenswrapper[4820]: I0201 15:29:20.907562 4820 scope.go:117] "RemoveContainer" containerID="1fe266e3e3eccdeacc76bb4b2478eddd4e69a5b9986e361958afed0eba17e795" Feb 01 15:29:21 crc kubenswrapper[4820]: I0201 15:29:21.131058 4820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-ng5b5_must-gather-hlvcm_e5777edd-acdc-4e8c-952c-fca23e3a1311/gather/0.log" Feb 01 15:29:28 crc kubenswrapper[4820]: I0201 15:29:28.697801 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ng5b5/must-gather-hlvcm"] Feb 01 15:29:28 crc kubenswrapper[4820]: I0201 15:29:28.698540 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-ng5b5/must-gather-hlvcm" podUID="e5777edd-acdc-4e8c-952c-fca23e3a1311" containerName="copy" containerID="cri-o://fdb24e7439a3ccc84b1417ca57de763d18b8e201fe6f9b4252534d504c6378ef" gracePeriod=2 Feb 01 15:29:28 crc kubenswrapper[4820]: I0201 15:29:28.709503 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ng5b5/must-gather-hlvcm"] Feb 01 15:29:28 crc kubenswrapper[4820]: I0201 15:29:28.989413 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ng5b5_must-gather-hlvcm_e5777edd-acdc-4e8c-952c-fca23e3a1311/copy/0.log" Feb 01 15:29:28 crc kubenswrapper[4820]: I0201 15:29:28.994720 4820 generic.go:334] "Generic (PLEG): container finished" podID="e5777edd-acdc-4e8c-952c-fca23e3a1311" containerID="fdb24e7439a3ccc84b1417ca57de763d18b8e201fe6f9b4252534d504c6378ef" exitCode=143 Feb 01 15:29:29 crc kubenswrapper[4820]: I0201 15:29:29.184706 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ng5b5_must-gather-hlvcm_e5777edd-acdc-4e8c-952c-fca23e3a1311/copy/0.log" Feb 01 15:29:29 crc kubenswrapper[4820]: I0201 15:29:29.185269 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ng5b5/must-gather-hlvcm" Feb 01 15:29:29 crc kubenswrapper[4820]: I0201 15:29:29.201564 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s48tx\" (UniqueName: \"kubernetes.io/projected/e5777edd-acdc-4e8c-952c-fca23e3a1311-kube-api-access-s48tx\") pod \"e5777edd-acdc-4e8c-952c-fca23e3a1311\" (UID: \"e5777edd-acdc-4e8c-952c-fca23e3a1311\") " Feb 01 15:29:29 crc kubenswrapper[4820]: I0201 15:29:29.201788 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e5777edd-acdc-4e8c-952c-fca23e3a1311-must-gather-output\") pod \"e5777edd-acdc-4e8c-952c-fca23e3a1311\" (UID: \"e5777edd-acdc-4e8c-952c-fca23e3a1311\") " Feb 01 15:29:29 crc kubenswrapper[4820]: I0201 15:29:29.212787 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5777edd-acdc-4e8c-952c-fca23e3a1311-kube-api-access-s48tx" (OuterVolumeSpecName: "kube-api-access-s48tx") pod "e5777edd-acdc-4e8c-952c-fca23e3a1311" (UID: "e5777edd-acdc-4e8c-952c-fca23e3a1311"). InnerVolumeSpecName "kube-api-access-s48tx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:29:29 crc kubenswrapper[4820]: I0201 15:29:29.303679 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s48tx\" (UniqueName: \"kubernetes.io/projected/e5777edd-acdc-4e8c-952c-fca23e3a1311-kube-api-access-s48tx\") on node \"crc\" DevicePath \"\"" Feb 01 15:29:29 crc kubenswrapper[4820]: I0201 15:29:29.364399 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5777edd-acdc-4e8c-952c-fca23e3a1311-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e5777edd-acdc-4e8c-952c-fca23e3a1311" (UID: "e5777edd-acdc-4e8c-952c-fca23e3a1311"). 
InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:29:29 crc kubenswrapper[4820]: I0201 15:29:29.406442 4820 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e5777edd-acdc-4e8c-952c-fca23e3a1311-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 01 15:29:30 crc kubenswrapper[4820]: I0201 15:29:30.007180 4820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ng5b5_must-gather-hlvcm_e5777edd-acdc-4e8c-952c-fca23e3a1311/copy/0.log" Feb 01 15:29:30 crc kubenswrapper[4820]: I0201 15:29:30.007997 4820 scope.go:117] "RemoveContainer" containerID="fdb24e7439a3ccc84b1417ca57de763d18b8e201fe6f9b4252534d504c6378ef" Feb 01 15:29:30 crc kubenswrapper[4820]: I0201 15:29:30.008149 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ng5b5/must-gather-hlvcm" Feb 01 15:29:30 crc kubenswrapper[4820]: I0201 15:29:30.047959 4820 scope.go:117] "RemoveContainer" containerID="1fe266e3e3eccdeacc76bb4b2478eddd4e69a5b9986e361958afed0eba17e795" Feb 01 15:29:31 crc kubenswrapper[4820]: I0201 15:29:31.208789 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5777edd-acdc-4e8c-952c-fca23e3a1311" path="/var/lib/kubelet/pods/e5777edd-acdc-4e8c-952c-fca23e3a1311/volumes" Feb 01 15:29:49 crc kubenswrapper[4820]: I0201 15:29:49.242700 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 15:29:49 crc kubenswrapper[4820]: I0201 15:29:49.243541 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 15:29:49 crc kubenswrapper[4820]: I0201 15:29:49.243619 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" Feb 01 15:29:49 crc kubenswrapper[4820]: I0201 15:29:49.245056 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3cc1d690e95fc450ab3fa44774018efb8f1b2cbe45f56a3847bca6c5ddad7518"} pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 15:29:49 crc kubenswrapper[4820]: I0201 15:29:49.245148 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" containerID="cri-o://3cc1d690e95fc450ab3fa44774018efb8f1b2cbe45f56a3847bca6c5ddad7518" gracePeriod=600 Feb 01 15:29:50 crc kubenswrapper[4820]: I0201 15:29:50.195093 4820 generic.go:334] "Generic (PLEG): container finished" podID="060a9e0b-803f-4ccc-bed6-92614d449527" containerID="3cc1d690e95fc450ab3fa44774018efb8f1b2cbe45f56a3847bca6c5ddad7518" exitCode=0 Feb 01 15:29:50 crc kubenswrapper[4820]: I0201 15:29:50.195159 4820 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerDied","Data":"3cc1d690e95fc450ab3fa44774018efb8f1b2cbe45f56a3847bca6c5ddad7518"} Feb 01 15:29:50 crc kubenswrapper[4820]: I0201 15:29:50.195706 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerStarted","Data":"63cafd697363e7174d48445ee004a950f65cf6adb914081c94aa8df849d55389"} Feb 01 15:29:50 crc kubenswrapper[4820]: I0201 15:29:50.195735 4820 scope.go:117] "RemoveContainer" containerID="7a9cb1594fcad44ddcc482a07c59abaa9ea0b65f24b10d15efcd853cb9f22b91" Feb 01 15:30:00 crc kubenswrapper[4820]: I0201 15:30:00.212675 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499330-t9m87"] Feb 01 15:30:00 crc kubenswrapper[4820]: E0201 15:30:00.213824 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5777edd-acdc-4e8c-952c-fca23e3a1311" containerName="gather" Feb 01 15:30:00 crc kubenswrapper[4820]: I0201 15:30:00.213844 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5777edd-acdc-4e8c-952c-fca23e3a1311" containerName="gather" Feb 01 15:30:00 crc kubenswrapper[4820]: E0201 15:30:00.213897 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64b4ece8-9445-4399-ad3b-fe17f985fe9d" containerName="extract-content" Feb 01 15:30:00 crc kubenswrapper[4820]: I0201 15:30:00.213907 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b4ece8-9445-4399-ad3b-fe17f985fe9d" containerName="extract-content" Feb 01 15:30:00 crc kubenswrapper[4820]: E0201 15:30:00.213930 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5777edd-acdc-4e8c-952c-fca23e3a1311" containerName="copy" Feb 01 15:30:00 crc kubenswrapper[4820]: I0201 15:30:00.213939 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5777edd-acdc-4e8c-952c-fca23e3a1311" containerName="copy" Feb 01 15:30:00 crc kubenswrapper[4820]: E0201 15:30:00.213956 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64b4ece8-9445-4399-ad3b-fe17f985fe9d" containerName="registry-server" Feb 01 15:30:00 crc kubenswrapper[4820]: I0201 15:30:00.213964 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b4ece8-9445-4399-ad3b-fe17f985fe9d" containerName="registry-server" Feb 01 15:30:00 crc kubenswrapper[4820]: E0201 15:30:00.213979 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64b4ece8-9445-4399-ad3b-fe17f985fe9d" containerName="extract-utilities" Feb 01 15:30:00 crc kubenswrapper[4820]: I0201 15:30:00.213988 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b4ece8-9445-4399-ad3b-fe17f985fe9d" containerName="extract-utilities" Feb 01 15:30:00 crc kubenswrapper[4820]: I0201 15:30:00.214195 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="64b4ece8-9445-4399-ad3b-fe17f985fe9d" containerName="registry-server" Feb 01 15:30:00 crc kubenswrapper[4820]: I0201 15:30:00.214230 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5777edd-acdc-4e8c-952c-fca23e3a1311" containerName="gather" Feb 01 15:30:00 crc kubenswrapper[4820]: I0201 15:30:00.214240 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5777edd-acdc-4e8c-952c-fca23e3a1311" containerName="copy" Feb 01 15:30:00 crc kubenswrapper[4820]: I0201 15:30:00.215005 4820 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499330-t9m87"
Feb 01 15:30:00 crc kubenswrapper[4820]: I0201 15:30:00.219614 4820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 01 15:30:00 crc kubenswrapper[4820]: I0201 15:30:00.219696 4820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 01 15:30:00 crc kubenswrapper[4820]: I0201 15:30:00.229359 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499330-t9m87"]
Feb 01 15:30:00 crc kubenswrapper[4820]: I0201 15:30:00.408835 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83adf5bd-8050-4679-875b-bc4f5e086a5a-secret-volume\") pod \"collect-profiles-29499330-t9m87\" (UID: \"83adf5bd-8050-4679-875b-bc4f5e086a5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499330-t9m87"
Feb 01 15:30:00 crc kubenswrapper[4820]: I0201 15:30:00.409193 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7q2r\" (UniqueName: \"kubernetes.io/projected/83adf5bd-8050-4679-875b-bc4f5e086a5a-kube-api-access-t7q2r\") pod \"collect-profiles-29499330-t9m87\" (UID: \"83adf5bd-8050-4679-875b-bc4f5e086a5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499330-t9m87"
Feb 01 15:30:00 crc kubenswrapper[4820]: I0201 15:30:00.409450 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83adf5bd-8050-4679-875b-bc4f5e086a5a-config-volume\") pod \"collect-profiles-29499330-t9m87\" (UID: \"83adf5bd-8050-4679-875b-bc4f5e086a5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499330-t9m87"
Feb 01 15:30:00 crc kubenswrapper[4820]: I0201 15:30:00.511105 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7q2r\" (UniqueName: \"kubernetes.io/projected/83adf5bd-8050-4679-875b-bc4f5e086a5a-kube-api-access-t7q2r\") pod \"collect-profiles-29499330-t9m87\" (UID: \"83adf5bd-8050-4679-875b-bc4f5e086a5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499330-t9m87"
Feb 01 15:30:00 crc kubenswrapper[4820]: I0201 15:30:00.511237 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83adf5bd-8050-4679-875b-bc4f5e086a5a-config-volume\") pod \"collect-profiles-29499330-t9m87\" (UID: \"83adf5bd-8050-4679-875b-bc4f5e086a5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499330-t9m87"
Feb 01 15:30:00 crc kubenswrapper[4820]: I0201 15:30:00.511265 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83adf5bd-8050-4679-875b-bc4f5e086a5a-secret-volume\") pod \"collect-profiles-29499330-t9m87\" (UID: \"83adf5bd-8050-4679-875b-bc4f5e086a5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499330-t9m87"
Feb 01 15:30:00 crc kubenswrapper[4820]: I0201 15:30:00.513477 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83adf5bd-8050-4679-875b-bc4f5e086a5a-config-volume\") pod \"collect-profiles-29499330-t9m87\" (UID: \"83adf5bd-8050-4679-875b-bc4f5e086a5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499330-t9m87"
Feb 01 15:30:00 crc kubenswrapper[4820]: I0201 15:30:00.518231 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83adf5bd-8050-4679-875b-bc4f5e086a5a-secret-volume\") pod \"collect-profiles-29499330-t9m87\" (UID: \"83adf5bd-8050-4679-875b-bc4f5e086a5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499330-t9m87"
Feb 01 15:30:00 crc kubenswrapper[4820]: I0201 15:30:00.535456 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7q2r\" (UniqueName: \"kubernetes.io/projected/83adf5bd-8050-4679-875b-bc4f5e086a5a-kube-api-access-t7q2r\") pod \"collect-profiles-29499330-t9m87\" (UID: \"83adf5bd-8050-4679-875b-bc4f5e086a5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29499330-t9m87"
Feb 01 15:30:00 crc kubenswrapper[4820]: I0201 15:30:00.566149 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499330-t9m87"
Feb 01 15:30:01 crc kubenswrapper[4820]: W0201 15:30:01.065290 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83adf5bd_8050_4679_875b_bc4f5e086a5a.slice/crio-c4be222bb98f04c2d7e5e7d19831bbc677c015949d09b1f306e603832e6fbfca WatchSource:0}: Error finding container c4be222bb98f04c2d7e5e7d19831bbc677c015949d09b1f306e603832e6fbfca: Status 404 returned error can't find the container with id c4be222bb98f04c2d7e5e7d19831bbc677c015949d09b1f306e603832e6fbfca
Feb 01 15:30:01 crc kubenswrapper[4820]: I0201 15:30:01.071414 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499330-t9m87"]
Feb 01 15:30:01 crc kubenswrapper[4820]: I0201 15:30:01.333207 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499330-t9m87" event={"ID":"83adf5bd-8050-4679-875b-bc4f5e086a5a","Type":"ContainerStarted","Data":"9c65ba154060a1b44fb05fdd70e864ee4208cee045c822768bc6ef590280cf4b"}
Feb 01 15:30:01 crc kubenswrapper[4820]: I0201 15:30:01.333658 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499330-t9m87" event={"ID":"83adf5bd-8050-4679-875b-bc4f5e086a5a","Type":"ContainerStarted","Data":"c4be222bb98f04c2d7e5e7d19831bbc677c015949d09b1f306e603832e6fbfca"}
Feb 01 15:30:01 crc kubenswrapper[4820]: I0201 15:30:01.367036 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29499330-t9m87" podStartSLOduration=1.367014659 podStartE2EDuration="1.367014659s" podCreationTimestamp="2026-02-01 15:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 15:30:01.348625964 +0000 UTC m=+4142.868992248" watchObservedRunningTime="2026-02-01 15:30:01.367014659 +0000 UTC m=+4142.887380953"
Feb 01 15:30:02 crc kubenswrapper[4820]: I0201 15:30:02.373996 4820 generic.go:334] "Generic (PLEG): container finished" podID="83adf5bd-8050-4679-875b-bc4f5e086a5a" containerID="9c65ba154060a1b44fb05fdd70e864ee4208cee045c822768bc6ef590280cf4b" exitCode=0
Feb 01 15:30:02 crc kubenswrapper[4820]: I0201 15:30:02.374465 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499330-t9m87" event={"ID":"83adf5bd-8050-4679-875b-bc4f5e086a5a","Type":"ContainerDied","Data":"9c65ba154060a1b44fb05fdd70e864ee4208cee045c822768bc6ef590280cf4b"}
Feb 01 15:30:03 crc kubenswrapper[4820]: I0201 15:30:03.829244 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499330-t9m87"
Feb 01 15:30:03 crc kubenswrapper[4820]: I0201 15:30:03.994323 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83adf5bd-8050-4679-875b-bc4f5e086a5a-secret-volume\") pod \"83adf5bd-8050-4679-875b-bc4f5e086a5a\" (UID: \"83adf5bd-8050-4679-875b-bc4f5e086a5a\") "
Feb 01 15:30:03 crc kubenswrapper[4820]: I0201 15:30:03.994868 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7q2r\" (UniqueName: \"kubernetes.io/projected/83adf5bd-8050-4679-875b-bc4f5e086a5a-kube-api-access-t7q2r\") pod \"83adf5bd-8050-4679-875b-bc4f5e086a5a\" (UID: \"83adf5bd-8050-4679-875b-bc4f5e086a5a\") "
Feb 01 15:30:03 crc kubenswrapper[4820]: I0201 15:30:03.995149 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83adf5bd-8050-4679-875b-bc4f5e086a5a-config-volume\") pod \"83adf5bd-8050-4679-875b-bc4f5e086a5a\" (UID: \"83adf5bd-8050-4679-875b-bc4f5e086a5a\") "
Feb 01 15:30:03 crc kubenswrapper[4820]: I0201 15:30:03.995727 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83adf5bd-8050-4679-875b-bc4f5e086a5a-config-volume" (OuterVolumeSpecName: "config-volume") pod "83adf5bd-8050-4679-875b-bc4f5e086a5a" (UID: "83adf5bd-8050-4679-875b-bc4f5e086a5a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 15:30:04 crc kubenswrapper[4820]: I0201 15:30:04.001066 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83adf5bd-8050-4679-875b-bc4f5e086a5a-kube-api-access-t7q2r" (OuterVolumeSpecName: "kube-api-access-t7q2r") pod "83adf5bd-8050-4679-875b-bc4f5e086a5a" (UID: "83adf5bd-8050-4679-875b-bc4f5e086a5a"). InnerVolumeSpecName "kube-api-access-t7q2r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 15:30:04 crc kubenswrapper[4820]: I0201 15:30:04.001471 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83adf5bd-8050-4679-875b-bc4f5e086a5a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "83adf5bd-8050-4679-875b-bc4f5e086a5a" (UID: "83adf5bd-8050-4679-875b-bc4f5e086a5a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 15:30:04 crc kubenswrapper[4820]: I0201 15:30:04.098075 4820 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83adf5bd-8050-4679-875b-bc4f5e086a5a-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 01 15:30:04 crc kubenswrapper[4820]: I0201 15:30:04.098141 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7q2r\" (UniqueName: \"kubernetes.io/projected/83adf5bd-8050-4679-875b-bc4f5e086a5a-kube-api-access-t7q2r\") on node \"crc\" DevicePath \"\""
Feb 01 15:30:04 crc kubenswrapper[4820]: I0201 15:30:04.098162 4820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83adf5bd-8050-4679-875b-bc4f5e086a5a-config-volume\") on node \"crc\" DevicePath \"\""
Feb 01 15:30:04 crc kubenswrapper[4820]: I0201 15:30:04.402003 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29499330-t9m87" event={"ID":"83adf5bd-8050-4679-875b-bc4f5e086a5a","Type":"ContainerDied","Data":"c4be222bb98f04c2d7e5e7d19831bbc677c015949d09b1f306e603832e6fbfca"}
Feb 01 15:30:04 crc kubenswrapper[4820]: I0201 15:30:04.402047 4820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4be222bb98f04c2d7e5e7d19831bbc677c015949d09b1f306e603832e6fbfca"
Feb 01 15:30:04 crc kubenswrapper[4820]: I0201 15:30:04.402105 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29499330-t9m87"
Feb 01 15:30:04 crc kubenswrapper[4820]: I0201 15:30:04.473135 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499285-f4lrx"]
Feb 01 15:30:04 crc kubenswrapper[4820]: I0201 15:30:04.485456 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29499285-f4lrx"]
Feb 01 15:30:05 crc kubenswrapper[4820]: I0201 15:30:05.215467 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b6821db-9b76-463b-b4ef-cbb8315b3666" path="/var/lib/kubelet/pods/4b6821db-9b76-463b-b4ef-cbb8315b3666/volumes"
Feb 01 15:30:12 crc kubenswrapper[4820]: I0201 15:30:12.661611 4820 scope.go:117] "RemoveContainer" containerID="10c3254dffd9946f68dbe7972ad9af89e93a9b9ef49e71985be377a6ee4a6d18"
Feb 01 15:30:12 crc kubenswrapper[4820]: I0201 15:30:12.698686 4820 scope.go:117] "RemoveContainer" containerID="fdcff894861229422e925786a0de4a197f08e66b75421e97365ce2e81486eb84"
Feb 01 15:30:39 crc kubenswrapper[4820]: I0201 15:30:39.837018 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-48wnr"]
Feb 01 15:30:39 crc kubenswrapper[4820]: E0201 15:30:39.838970 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83adf5bd-8050-4679-875b-bc4f5e086a5a" containerName="collect-profiles"
Feb 01 15:30:39 crc kubenswrapper[4820]: I0201 15:30:39.839003 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="83adf5bd-8050-4679-875b-bc4f5e086a5a" containerName="collect-profiles"
Feb 01 15:30:39 crc kubenswrapper[4820]: I0201 15:30:39.839533 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="83adf5bd-8050-4679-875b-bc4f5e086a5a" containerName="collect-profiles"
Feb 01 15:30:39 crc kubenswrapper[4820]: I0201 15:30:39.842409 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-48wnr"
Feb 01 15:30:39 crc kubenswrapper[4820]: I0201 15:30:39.853348 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-48wnr"]
Feb 01 15:30:39 crc kubenswrapper[4820]: I0201 15:30:39.858418 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec1539e9-f96d-457d-9786-83737325ef13-utilities\") pod \"redhat-marketplace-48wnr\" (UID: \"ec1539e9-f96d-457d-9786-83737325ef13\") " pod="openshift-marketplace/redhat-marketplace-48wnr"
Feb 01 15:30:39 crc kubenswrapper[4820]: I0201 15:30:39.858677 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec1539e9-f96d-457d-9786-83737325ef13-catalog-content\") pod \"redhat-marketplace-48wnr\" (UID: \"ec1539e9-f96d-457d-9786-83737325ef13\") " pod="openshift-marketplace/redhat-marketplace-48wnr"
Feb 01 15:30:39 crc kubenswrapper[4820]: I0201 15:30:39.858800 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9m8s\" (UniqueName: \"kubernetes.io/projected/ec1539e9-f96d-457d-9786-83737325ef13-kube-api-access-c9m8s\") pod \"redhat-marketplace-48wnr\" (UID: \"ec1539e9-f96d-457d-9786-83737325ef13\") " pod="openshift-marketplace/redhat-marketplace-48wnr"
Feb 01 15:30:39 crc kubenswrapper[4820]: I0201 15:30:39.960442 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec1539e9-f96d-457d-9786-83737325ef13-catalog-content\") pod \"redhat-marketplace-48wnr\" (UID: \"ec1539e9-f96d-457d-9786-83737325ef13\") " pod="openshift-marketplace/redhat-marketplace-48wnr"
Feb 01 15:30:39 crc kubenswrapper[4820]: I0201 15:30:39.960765 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9m8s\" (UniqueName: \"kubernetes.io/projected/ec1539e9-f96d-457d-9786-83737325ef13-kube-api-access-c9m8s\") pod \"redhat-marketplace-48wnr\" (UID: \"ec1539e9-f96d-457d-9786-83737325ef13\") " pod="openshift-marketplace/redhat-marketplace-48wnr"
Feb 01 15:30:39 crc kubenswrapper[4820]: I0201 15:30:39.961027 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec1539e9-f96d-457d-9786-83737325ef13-utilities\") pod \"redhat-marketplace-48wnr\" (UID: \"ec1539e9-f96d-457d-9786-83737325ef13\") " pod="openshift-marketplace/redhat-marketplace-48wnr"
Feb 01 15:30:39 crc kubenswrapper[4820]: I0201 15:30:39.961145 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec1539e9-f96d-457d-9786-83737325ef13-catalog-content\") pod \"redhat-marketplace-48wnr\" (UID: \"ec1539e9-f96d-457d-9786-83737325ef13\") " pod="openshift-marketplace/redhat-marketplace-48wnr"
Feb 01 15:30:39 crc kubenswrapper[4820]: I0201 15:30:39.961537 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec1539e9-f96d-457d-9786-83737325ef13-utilities\") pod \"redhat-marketplace-48wnr\" (UID: \"ec1539e9-f96d-457d-9786-83737325ef13\") " pod="openshift-marketplace/redhat-marketplace-48wnr"
Feb 01 15:30:39 crc kubenswrapper[4820]: I0201 15:30:39.984084 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9m8s\" (UniqueName: \"kubernetes.io/projected/ec1539e9-f96d-457d-9786-83737325ef13-kube-api-access-c9m8s\") pod \"redhat-marketplace-48wnr\" (UID: \"ec1539e9-f96d-457d-9786-83737325ef13\") " pod="openshift-marketplace/redhat-marketplace-48wnr"
Feb 01 15:30:40 crc kubenswrapper[4820]: I0201 15:30:40.206451 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-48wnr"
Feb 01 15:30:40 crc kubenswrapper[4820]: I0201 15:30:40.731609 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-48wnr"]
Feb 01 15:30:41 crc kubenswrapper[4820]: I0201 15:30:41.166686 4820 generic.go:334] "Generic (PLEG): container finished" podID="ec1539e9-f96d-457d-9786-83737325ef13" containerID="c6283886952988b249f274a4c1319c62b719468e5183784082651a8f644ea6df" exitCode=0
Feb 01 15:30:41 crc kubenswrapper[4820]: I0201 15:30:41.168193 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48wnr" event={"ID":"ec1539e9-f96d-457d-9786-83737325ef13","Type":"ContainerDied","Data":"c6283886952988b249f274a4c1319c62b719468e5183784082651a8f644ea6df"}
Feb 01 15:30:41 crc kubenswrapper[4820]: I0201 15:30:41.168218 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48wnr" event={"ID":"ec1539e9-f96d-457d-9786-83737325ef13","Type":"ContainerStarted","Data":"c17f522d8a4f166b6333a36c26c24d84f9511ff43c1667b221d3ace46dec81ac"}
Feb 01 15:30:42 crc kubenswrapper[4820]: I0201 15:30:42.200034 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48wnr" event={"ID":"ec1539e9-f96d-457d-9786-83737325ef13","Type":"ContainerStarted","Data":"ab6b740ac4a2d0c76fb247d2fbde39320c976680d4642950c782f74100cfe996"}
Feb 01 15:30:43 crc kubenswrapper[4820]: I0201 15:30:43.212629 4820 generic.go:334] "Generic (PLEG): container finished" podID="ec1539e9-f96d-457d-9786-83737325ef13" containerID="ab6b740ac4a2d0c76fb247d2fbde39320c976680d4642950c782f74100cfe996" exitCode=0
Feb 01 15:30:43 crc kubenswrapper[4820]: I0201 15:30:43.212691 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48wnr" event={"ID":"ec1539e9-f96d-457d-9786-83737325ef13","Type":"ContainerDied","Data":"ab6b740ac4a2d0c76fb247d2fbde39320c976680d4642950c782f74100cfe996"}
Feb 01 15:30:44 crc kubenswrapper[4820]: I0201 15:30:44.223205 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48wnr" event={"ID":"ec1539e9-f96d-457d-9786-83737325ef13","Type":"ContainerStarted","Data":"0668e2ff101ede5add37c1addc7d57a19441d12098fdc96d4e79749ebd62f01f"}
Feb 01 15:30:44 crc kubenswrapper[4820]: I0201 15:30:44.247911 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-48wnr" podStartSLOduration=2.77463897 podStartE2EDuration="5.247892924s" podCreationTimestamp="2026-02-01 15:30:39 +0000 UTC" firstStartedPulling="2026-02-01 15:30:41.169987582 +0000 UTC m=+4182.690353856" lastFinishedPulling="2026-02-01 15:30:43.643241516 +0000 UTC m=+4185.163607810" observedRunningTime="2026-02-01 15:30:44.242318062 +0000 UTC m=+4185.762684356" watchObservedRunningTime="2026-02-01 15:30:44.247892924 +0000 UTC m=+4185.768259208"
Feb 01 15:30:50 crc kubenswrapper[4820]: I0201 15:30:50.206990 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-48wnr"
Feb 01 15:30:50 crc kubenswrapper[4820]: I0201 15:30:50.207727 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-48wnr"
Feb 01 15:30:50 crc kubenswrapper[4820]: I0201 15:30:50.294983 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-48wnr"
Feb 01 15:30:50 crc kubenswrapper[4820]: I0201 15:30:50.364663 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-48wnr"
Feb 01 15:30:50 crc kubenswrapper[4820]: I0201 15:30:50.543145 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-48wnr"]
Feb 01 15:30:52 crc kubenswrapper[4820]: I0201 15:30:52.303567 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-48wnr" podUID="ec1539e9-f96d-457d-9786-83737325ef13" containerName="registry-server" containerID="cri-o://0668e2ff101ede5add37c1addc7d57a19441d12098fdc96d4e79749ebd62f01f" gracePeriod=2
Feb 01 15:30:52 crc kubenswrapper[4820]: I0201 15:30:52.867440 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-48wnr"
Feb 01 15:30:52 crc kubenswrapper[4820]: I0201 15:30:52.974362 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec1539e9-f96d-457d-9786-83737325ef13-utilities\") pod \"ec1539e9-f96d-457d-9786-83737325ef13\" (UID: \"ec1539e9-f96d-457d-9786-83737325ef13\") "
Feb 01 15:30:52 crc kubenswrapper[4820]: I0201 15:30:52.974417 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9m8s\" (UniqueName: \"kubernetes.io/projected/ec1539e9-f96d-457d-9786-83737325ef13-kube-api-access-c9m8s\") pod \"ec1539e9-f96d-457d-9786-83737325ef13\" (UID: \"ec1539e9-f96d-457d-9786-83737325ef13\") "
Feb 01 15:30:52 crc kubenswrapper[4820]: I0201 15:30:52.974459 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec1539e9-f96d-457d-9786-83737325ef13-catalog-content\") pod \"ec1539e9-f96d-457d-9786-83737325ef13\" (UID: \"ec1539e9-f96d-457d-9786-83737325ef13\") "
Feb 01 15:30:52 crc kubenswrapper[4820]: I0201 15:30:52.976141 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec1539e9-f96d-457d-9786-83737325ef13-utilities" (OuterVolumeSpecName: "utilities") pod "ec1539e9-f96d-457d-9786-83737325ef13" (UID: "ec1539e9-f96d-457d-9786-83737325ef13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 15:30:52 crc kubenswrapper[4820]: I0201 15:30:52.996133 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec1539e9-f96d-457d-9786-83737325ef13-kube-api-access-c9m8s" (OuterVolumeSpecName: "kube-api-access-c9m8s") pod "ec1539e9-f96d-457d-9786-83737325ef13" (UID: "ec1539e9-f96d-457d-9786-83737325ef13"). InnerVolumeSpecName "kube-api-access-c9m8s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 15:30:53 crc kubenswrapper[4820]: I0201 15:30:53.077115 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec1539e9-f96d-457d-9786-83737325ef13-utilities\") on node \"crc\" DevicePath \"\""
Feb 01 15:30:53 crc kubenswrapper[4820]: I0201 15:30:53.077157 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9m8s\" (UniqueName: \"kubernetes.io/projected/ec1539e9-f96d-457d-9786-83737325ef13-kube-api-access-c9m8s\") on node \"crc\" DevicePath \"\""
Feb 01 15:30:53 crc kubenswrapper[4820]: I0201 15:30:53.299612 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec1539e9-f96d-457d-9786-83737325ef13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec1539e9-f96d-457d-9786-83737325ef13" (UID: "ec1539e9-f96d-457d-9786-83737325ef13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 15:30:53 crc kubenswrapper[4820]: I0201 15:30:53.315016 4820 generic.go:334] "Generic (PLEG): container finished" podID="ec1539e9-f96d-457d-9786-83737325ef13" containerID="0668e2ff101ede5add37c1addc7d57a19441d12098fdc96d4e79749ebd62f01f" exitCode=0
Feb 01 15:30:53 crc kubenswrapper[4820]: I0201 15:30:53.315077 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48wnr" event={"ID":"ec1539e9-f96d-457d-9786-83737325ef13","Type":"ContainerDied","Data":"0668e2ff101ede5add37c1addc7d57a19441d12098fdc96d4e79749ebd62f01f"}
Feb 01 15:30:53 crc kubenswrapper[4820]: I0201 15:30:53.315118 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48wnr" event={"ID":"ec1539e9-f96d-457d-9786-83737325ef13","Type":"ContainerDied","Data":"c17f522d8a4f166b6333a36c26c24d84f9511ff43c1667b221d3ace46dec81ac"}
Feb 01 15:30:53 crc kubenswrapper[4820]: I0201 15:30:53.315146 4820 scope.go:117] "RemoveContainer" containerID="0668e2ff101ede5add37c1addc7d57a19441d12098fdc96d4e79749ebd62f01f"
Feb 01 15:30:53 crc kubenswrapper[4820]: I0201 15:30:53.315079 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-48wnr"
Feb 01 15:30:53 crc kubenswrapper[4820]: I0201 15:30:53.335346 4820 scope.go:117] "RemoveContainer" containerID="ab6b740ac4a2d0c76fb247d2fbde39320c976680d4642950c782f74100cfe996"
Feb 01 15:30:53 crc kubenswrapper[4820]: I0201 15:30:53.353022 4820 scope.go:117] "RemoveContainer" containerID="c6283886952988b249f274a4c1319c62b719468e5183784082651a8f644ea6df"
Feb 01 15:30:53 crc kubenswrapper[4820]: I0201 15:30:53.384244 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec1539e9-f96d-457d-9786-83737325ef13-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 01 15:30:53 crc kubenswrapper[4820]: I0201 15:30:53.432609 4820 scope.go:117] "RemoveContainer" containerID="0668e2ff101ede5add37c1addc7d57a19441d12098fdc96d4e79749ebd62f01f"
Feb 01 15:30:53 crc kubenswrapper[4820]: E0201 15:30:53.433279 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0668e2ff101ede5add37c1addc7d57a19441d12098fdc96d4e79749ebd62f01f\": container with ID starting with 0668e2ff101ede5add37c1addc7d57a19441d12098fdc96d4e79749ebd62f01f not found: ID does not exist" containerID="0668e2ff101ede5add37c1addc7d57a19441d12098fdc96d4e79749ebd62f01f"
Feb 01 15:30:53 crc kubenswrapper[4820]: I0201 15:30:53.433334 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0668e2ff101ede5add37c1addc7d57a19441d12098fdc96d4e79749ebd62f01f"} err="failed to get container status \"0668e2ff101ede5add37c1addc7d57a19441d12098fdc96d4e79749ebd62f01f\": rpc error: code = NotFound desc = could not find container \"0668e2ff101ede5add37c1addc7d57a19441d12098fdc96d4e79749ebd62f01f\": container with ID starting with 0668e2ff101ede5add37c1addc7d57a19441d12098fdc96d4e79749ebd62f01f not found: ID does not exist"
Feb 01 15:30:53 crc kubenswrapper[4820]: I0201 15:30:53.433365 4820 scope.go:117] "RemoveContainer" containerID="ab6b740ac4a2d0c76fb247d2fbde39320c976680d4642950c782f74100cfe996"
Feb 01 15:30:53 crc kubenswrapper[4820]: E0201 15:30:53.434306 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab6b740ac4a2d0c76fb247d2fbde39320c976680d4642950c782f74100cfe996\": container with ID starting with ab6b740ac4a2d0c76fb247d2fbde39320c976680d4642950c782f74100cfe996 not found: ID does not exist" containerID="ab6b740ac4a2d0c76fb247d2fbde39320c976680d4642950c782f74100cfe996"
Feb 01 15:30:53 crc kubenswrapper[4820]: I0201 15:30:53.434407 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab6b740ac4a2d0c76fb247d2fbde39320c976680d4642950c782f74100cfe996"} err="failed to get container status \"ab6b740ac4a2d0c76fb247d2fbde39320c976680d4642950c782f74100cfe996\": rpc error: code = NotFound desc = could not find container \"ab6b740ac4a2d0c76fb247d2fbde39320c976680d4642950c782f74100cfe996\": container with ID starting with ab6b740ac4a2d0c76fb247d2fbde39320c976680d4642950c782f74100cfe996 not found: ID does not exist"
Feb 01 15:30:53 crc kubenswrapper[4820]: I0201 15:30:53.434483 4820 scope.go:117] "RemoveContainer" containerID="c6283886952988b249f274a4c1319c62b719468e5183784082651a8f644ea6df"
Feb 01 15:30:53 crc kubenswrapper[4820]: E0201 15:30:53.435277 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6283886952988b249f274a4c1319c62b719468e5183784082651a8f644ea6df\": container with ID starting with c6283886952988b249f274a4c1319c62b719468e5183784082651a8f644ea6df not found: ID does not exist" containerID="c6283886952988b249f274a4c1319c62b719468e5183784082651a8f644ea6df"
Feb 01 15:30:53 crc kubenswrapper[4820]: I0201 15:30:53.435311 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6283886952988b249f274a4c1319c62b719468e5183784082651a8f644ea6df"} err="failed to get container status \"c6283886952988b249f274a4c1319c62b719468e5183784082651a8f644ea6df\": rpc error: code = NotFound desc = could not find container \"c6283886952988b249f274a4c1319c62b719468e5183784082651a8f644ea6df\": container with ID starting with c6283886952988b249f274a4c1319c62b719468e5183784082651a8f644ea6df not found: ID does not exist"
Feb 01 15:30:53 crc kubenswrapper[4820]: I0201 15:30:53.501899 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-48wnr"]
Feb 01 15:30:53 crc kubenswrapper[4820]: I0201 15:30:53.511503 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-48wnr"]
Feb 01 15:30:55 crc kubenswrapper[4820]: I0201 15:30:55.213853 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec1539e9-f96d-457d-9786-83737325ef13" path="/var/lib/kubelet/pods/ec1539e9-f96d-457d-9786-83737325ef13/volumes"
Feb 01 15:31:12 crc kubenswrapper[4820]: I0201 15:31:12.836931 4820 scope.go:117] "RemoveContainer" containerID="a06e762da5995739168e538284883904b74dce125b0a62cdebec3551d4113bad"
Feb 01 15:31:12 crc kubenswrapper[4820]: I0201 15:31:12.873401 4820 scope.go:117] "RemoveContainer" containerID="85f05aaa00cb83f11459f7002f09ba7a5761f11c9d4a962bca9dab703e71f08e"
Feb 01 15:31:12 crc kubenswrapper[4820]: I0201 15:31:12.942582 4820 scope.go:117] "RemoveContainer" containerID="ff5edea11289235bbe4d7c69704f625431020c33d647b0102079285e1f456855"
Feb 01 15:31:12 crc kubenswrapper[4820]: I0201 15:31:12.992246 4820 scope.go:117] "RemoveContainer" containerID="19e157507210ccedb533fbce2ae7cf49d61d9e2a1df3a331d512308cfb86f9bf"
Feb 01 15:31:49 crc kubenswrapper[4820]: I0201 15:31:49.242930 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 01 15:31:49 crc kubenswrapper[4820]: I0201 15:31:49.243582 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 01 15:32:19 crc kubenswrapper[4820]: I0201 15:32:19.243234 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 01 15:32:19 crc kubenswrapper[4820]: I0201 15:32:19.243955 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 01 15:32:49 crc kubenswrapper[4820]: I0201 15:32:49.242294 4820 patch_prober.go:28] interesting pod/machine-config-daemon-w8vbg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 01 15:32:49 crc kubenswrapper[4820]: I0201 15:32:49.243023 4820 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 01 15:32:49 crc kubenswrapper[4820]: I0201 15:32:49.243078 4820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg"
Feb 01 15:32:49 crc kubenswrapper[4820]: I0201 15:32:49.244006 4820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"63cafd697363e7174d48445ee004a950f65cf6adb914081c94aa8df849d55389"} pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 01 15:32:49 crc kubenswrapper[4820]: I0201 15:32:49.244077 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527" containerName="machine-config-daemon" containerID="cri-o://63cafd697363e7174d48445ee004a950f65cf6adb914081c94aa8df849d55389" gracePeriod=600
Feb 01 15:32:49 crc kubenswrapper[4820]: E0201 15:32:49.372037 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527"
Feb 01 15:32:49 crc kubenswrapper[4820]: I0201 15:32:49.703284 4820 generic.go:334] "Generic (PLEG): container finished" podID="060a9e0b-803f-4ccc-bed6-92614d449527" containerID="63cafd697363e7174d48445ee004a950f65cf6adb914081c94aa8df849d55389" exitCode=0
Feb 01 15:32:49 crc kubenswrapper[4820]: I0201 15:32:49.703392 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" event={"ID":"060a9e0b-803f-4ccc-bed6-92614d449527","Type":"ContainerDied","Data":"63cafd697363e7174d48445ee004a950f65cf6adb914081c94aa8df849d55389"}
Feb 01 15:32:49 crc kubenswrapper[4820]: I0201 15:32:49.703705 4820 scope.go:117] "RemoveContainer" containerID="3cc1d690e95fc450ab3fa44774018efb8f1b2cbe45f56a3847bca6c5ddad7518"
Feb 01 15:32:49 crc kubenswrapper[4820]: I0201 15:32:49.704675 4820 scope.go:117] "RemoveContainer" containerID="63cafd697363e7174d48445ee004a950f65cf6adb914081c94aa8df849d55389"
Feb 01 15:32:49 crc kubenswrapper[4820]: E0201 15:32:49.705034 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527"
Feb 01 15:33:02 crc kubenswrapper[4820]: I0201 15:33:02.200479 4820 scope.go:117] "RemoveContainer" containerID="63cafd697363e7174d48445ee004a950f65cf6adb914081c94aa8df849d55389"
Feb 01 15:33:02 crc kubenswrapper[4820]: E0201 15:33:02.201519 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527"
Feb 01 15:33:17 crc kubenswrapper[4820]: I0201 15:33:17.202371 4820 scope.go:117] "RemoveContainer" containerID="63cafd697363e7174d48445ee004a950f65cf6adb914081c94aa8df849d55389"
Feb 01 15:33:17 crc kubenswrapper[4820]: E0201 15:33:17.205492 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527"
Feb 01 15:33:29 crc kubenswrapper[4820]: I0201 15:33:29.221631 4820 scope.go:117] "RemoveContainer" containerID="63cafd697363e7174d48445ee004a950f65cf6adb914081c94aa8df849d55389"
Feb 01 15:33:29 crc kubenswrapper[4820]: E0201 15:33:29.222822 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527"
Feb 01 15:33:40 crc kubenswrapper[4820]: I0201 15:33:40.198789 4820 scope.go:117] "RemoveContainer" containerID="63cafd697363e7174d48445ee004a950f65cf6adb914081c94aa8df849d55389"
Feb 01 15:33:40 crc kubenswrapper[4820]: E0201 15:33:40.199746 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527"
Feb 01 15:33:53 crc kubenswrapper[4820]: I0201 15:33:53.199485 4820 scope.go:117] "RemoveContainer" containerID="63cafd697363e7174d48445ee004a950f65cf6adb914081c94aa8df849d55389"
Feb 01 15:33:53 crc kubenswrapper[4820]: E0201 15:33:53.200281 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527"
Feb 01 15:34:04 crc kubenswrapper[4820]: I0201 15:34:04.199559 4820 scope.go:117] "RemoveContainer" containerID="63cafd697363e7174d48445ee004a950f65cf6adb914081c94aa8df849d55389"
Feb 01 15:34:04 crc kubenswrapper[4820]: E0201 15:34:04.200478 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527"
Feb 01 15:34:07 crc kubenswrapper[4820]: I0201 15:34:07.217016 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wv8p5"]
Feb 01 15:34:07 crc kubenswrapper[4820]: E0201 15:34:07.218392 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec1539e9-f96d-457d-9786-83737325ef13" containerName="registry-server"
Feb 01 15:34:07 crc kubenswrapper[4820]: I0201 15:34:07.218419 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec1539e9-f96d-457d-9786-83737325ef13" containerName="registry-server"
Feb 01 15:34:07 crc kubenswrapper[4820]: E0201 15:34:07.218456 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec1539e9-f96d-457d-9786-83737325ef13" containerName="extract-utilities"
Feb 01 15:34:07 crc kubenswrapper[4820]: I0201 15:34:07.218468 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec1539e9-f96d-457d-9786-83737325ef13" containerName="extract-utilities"
Feb 01 15:34:07 crc kubenswrapper[4820]: E0201 15:34:07.218484 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec1539e9-f96d-457d-9786-83737325ef13" containerName="extract-content"
Feb 01 15:34:07 crc kubenswrapper[4820]: I0201 15:34:07.218496 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec1539e9-f96d-457d-9786-83737325ef13" containerName="extract-content"
Feb 01 15:34:07 crc kubenswrapper[4820]: I0201 15:34:07.218829 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec1539e9-f96d-457d-9786-83737325ef13" containerName="registry-server"
Feb 01 15:34:07 crc kubenswrapper[4820]: I0201 15:34:07.221196 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wv8p5"]
Feb 01 15:34:07 crc kubenswrapper[4820]: I0201 15:34:07.221412 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wv8p5"
Feb 01 15:34:07 crc kubenswrapper[4820]: I0201 15:34:07.294513 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb1ca27-9cab-4ffb-9df9-6f91c3f95040-utilities\") pod \"community-operators-wv8p5\" (UID: \"4bb1ca27-9cab-4ffb-9df9-6f91c3f95040\") " pod="openshift-marketplace/community-operators-wv8p5"
Feb 01 15:34:07 crc kubenswrapper[4820]: I0201 15:34:07.294558 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqvkt\" (UniqueName: \"kubernetes.io/projected/4bb1ca27-9cab-4ffb-9df9-6f91c3f95040-kube-api-access-rqvkt\") pod \"community-operators-wv8p5\" (UID: \"4bb1ca27-9cab-4ffb-9df9-6f91c3f95040\") " pod="openshift-marketplace/community-operators-wv8p5"
Feb 01 15:34:07 crc kubenswrapper[4820]: I0201 15:34:07.294704 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb1ca27-9cab-4ffb-9df9-6f91c3f95040-catalog-content\") pod \"community-operators-wv8p5\" (UID: \"4bb1ca27-9cab-4ffb-9df9-6f91c3f95040\") " pod="openshift-marketplace/community-operators-wv8p5"
Feb 01 15:34:07 crc kubenswrapper[4820]: I0201 15:34:07.396193 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb1ca27-9cab-4ffb-9df9-6f91c3f95040-utilities\") pod \"community-operators-wv8p5\" (UID: \"4bb1ca27-9cab-4ffb-9df9-6f91c3f95040\") " pod="openshift-marketplace/community-operators-wv8p5"
Feb 01 15:34:07 crc kubenswrapper[4820]: I0201 15:34:07.396248 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqvkt\" (UniqueName: \"kubernetes.io/projected/4bb1ca27-9cab-4ffb-9df9-6f91c3f95040-kube-api-access-rqvkt\") pod \"community-operators-wv8p5\" (UID: \"4bb1ca27-9cab-4ffb-9df9-6f91c3f95040\") " pod="openshift-marketplace/community-operators-wv8p5"
Feb 01 15:34:07 crc kubenswrapper[4820]: I0201 15:34:07.396366 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb1ca27-9cab-4ffb-9df9-6f91c3f95040-catalog-content\") pod \"community-operators-wv8p5\" (UID: \"4bb1ca27-9cab-4ffb-9df9-6f91c3f95040\") " pod="openshift-marketplace/community-operators-wv8p5"
Feb 01 15:34:07 crc kubenswrapper[4820]: I0201 15:34:07.396777 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb1ca27-9cab-4ffb-9df9-6f91c3f95040-utilities\") pod \"community-operators-wv8p5\" (UID: \"4bb1ca27-9cab-4ffb-9df9-6f91c3f95040\") " pod="openshift-marketplace/community-operators-wv8p5"
Feb 01 15:34:07 crc kubenswrapper[4820]: I0201 15:34:07.396895 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb1ca27-9cab-4ffb-9df9-6f91c3f95040-catalog-content\") pod \"community-operators-wv8p5\" (UID: \"4bb1ca27-9cab-4ffb-9df9-6f91c3f95040\") " pod="openshift-marketplace/community-operators-wv8p5"
Feb 01 15:34:07 crc kubenswrapper[4820]: I0201 15:34:07.415082 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqvkt\" (UniqueName: \"kubernetes.io/projected/4bb1ca27-9cab-4ffb-9df9-6f91c3f95040-kube-api-access-rqvkt\") pod \"community-operators-wv8p5\" (UID: \"4bb1ca27-9cab-4ffb-9df9-6f91c3f95040\") " pod="openshift-marketplace/community-operators-wv8p5"
Feb 01 15:34:07 crc kubenswrapper[4820]: I0201 15:34:07.554345 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wv8p5"
Feb 01 15:34:08 crc kubenswrapper[4820]: I0201 15:34:08.055064 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wv8p5"]
Feb 01 15:34:08 crc kubenswrapper[4820]: I0201 15:34:08.665764 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wv8p5" event={"ID":"4bb1ca27-9cab-4ffb-9df9-6f91c3f95040","Type":"ContainerStarted","Data":"7cf6fef465854fe8b8f32f1d53680ae145ad19dd0b938b8781361a3084511ae7"}
Feb 01 15:34:09 crc kubenswrapper[4820]: I0201 15:34:09.685688 4820 generic.go:334] "Generic (PLEG): container finished" podID="4bb1ca27-9cab-4ffb-9df9-6f91c3f95040" containerID="eaa09fcacf0f84ee78381a26626d710a4f37b62a45848b2f08bd0851262f5cab" exitCode=0
Feb 01 15:34:09 crc kubenswrapper[4820]: I0201 15:34:09.685750 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wv8p5" event={"ID":"4bb1ca27-9cab-4ffb-9df9-6f91c3f95040","Type":"ContainerDied","Data":"eaa09fcacf0f84ee78381a26626d710a4f37b62a45848b2f08bd0851262f5cab"}
Feb 01 15:34:09 crc kubenswrapper[4820]: I0201 15:34:09.690153 4820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 01 15:34:10 crc kubenswrapper[4820]: I0201 15:34:10.721834 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wv8p5" event={"ID":"4bb1ca27-9cab-4ffb-9df9-6f91c3f95040","Type":"ContainerStarted","Data":"2a8983360d56c5a1a28030b0114ebf96dcef63773d480a6124f59b4e392ff47e"}
Feb 01 15:34:11 crc kubenswrapper[4820]: I0201 15:34:11.730070 4820 generic.go:334] "Generic (PLEG): container finished" podID="4bb1ca27-9cab-4ffb-9df9-6f91c3f95040" containerID="2a8983360d56c5a1a28030b0114ebf96dcef63773d480a6124f59b4e392ff47e" exitCode=0
Feb 01 15:34:11 crc kubenswrapper[4820]: I0201 15:34:11.730113 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wv8p5" event={"ID":"4bb1ca27-9cab-4ffb-9df9-6f91c3f95040","Type":"ContainerDied","Data":"2a8983360d56c5a1a28030b0114ebf96dcef63773d480a6124f59b4e392ff47e"}
Feb 01 15:34:12 crc kubenswrapper[4820]: I0201 15:34:12.743027 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wv8p5" event={"ID":"4bb1ca27-9cab-4ffb-9df9-6f91c3f95040","Type":"ContainerStarted","Data":"f5c781e749a69e6ae16615d85cd28d3cbe6e24a7e3d0cb8bbce337b8695e6f7e"}
Feb 01 15:34:12 crc kubenswrapper[4820]: I0201 15:34:12.764833 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wv8p5" podStartSLOduration=3.348400967 podStartE2EDuration="5.764817977s" podCreationTimestamp="2026-02-01 15:34:07 +0000 UTC" firstStartedPulling="2026-02-01 15:34:09.689742361 +0000 UTC m=+4391.210108685" lastFinishedPulling="2026-02-01 15:34:12.106159401 +0000 UTC m=+4393.626525695" observedRunningTime="2026-02-01 15:34:12.762310558 +0000 UTC m=+4394.282676872" watchObservedRunningTime="2026-02-01 15:34:12.764817977 +0000 UTC m=+4394.285184261"
Feb 01 15:34:15 crc kubenswrapper[4820]: I0201 15:34:15.200205 4820 scope.go:117] "RemoveContainer" containerID="63cafd697363e7174d48445ee004a950f65cf6adb914081c94aa8df849d55389"
Feb 01 15:34:15 crc kubenswrapper[4820]: E0201 15:34:15.200838 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527"
Feb 01 15:34:17 crc kubenswrapper[4820]: I0201 15:34:17.554706 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wv8p5"
Feb 01 15:34:17 crc kubenswrapper[4820]: I0201 15:34:17.554980 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wv8p5"
Feb 01 15:34:17 crc kubenswrapper[4820]: I0201 15:34:17.628602 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wv8p5"
Feb 01 15:34:17 crc kubenswrapper[4820]: I0201 15:34:17.873089 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wv8p5"
Feb 01 15:34:17 crc kubenswrapper[4820]: I0201 15:34:17.938654 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wv8p5"]
Feb 01 15:34:19 crc kubenswrapper[4820]: I0201 15:34:19.817152 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wv8p5" podUID="4bb1ca27-9cab-4ffb-9df9-6f91c3f95040" containerName="registry-server" containerID="cri-o://f5c781e749a69e6ae16615d85cd28d3cbe6e24a7e3d0cb8bbce337b8695e6f7e" gracePeriod=2
Feb 01 15:34:20 crc kubenswrapper[4820]: I0201 15:34:20.271380 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wv8p5"
Feb 01 15:34:20 crc kubenswrapper[4820]: I0201 15:34:20.386950 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqvkt\" (UniqueName: \"kubernetes.io/projected/4bb1ca27-9cab-4ffb-9df9-6f91c3f95040-kube-api-access-rqvkt\") pod \"4bb1ca27-9cab-4ffb-9df9-6f91c3f95040\" (UID: \"4bb1ca27-9cab-4ffb-9df9-6f91c3f95040\") "
Feb 01 15:34:20 crc kubenswrapper[4820]: I0201 15:34:20.387142 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb1ca27-9cab-4ffb-9df9-6f91c3f95040-utilities\") pod \"4bb1ca27-9cab-4ffb-9df9-6f91c3f95040\" (UID: \"4bb1ca27-9cab-4ffb-9df9-6f91c3f95040\") "
Feb 01 15:34:20 crc kubenswrapper[4820]: I0201 15:34:20.387391 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb1ca27-9cab-4ffb-9df9-6f91c3f95040-catalog-content\") pod \"4bb1ca27-9cab-4ffb-9df9-6f91c3f95040\" (UID: \"4bb1ca27-9cab-4ffb-9df9-6f91c3f95040\") "
Feb 01 15:34:20 crc kubenswrapper[4820]: I0201 15:34:20.388400 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bb1ca27-9cab-4ffb-9df9-6f91c3f95040-utilities" (OuterVolumeSpecName: "utilities") pod "4bb1ca27-9cab-4ffb-9df9-6f91c3f95040" (UID: "4bb1ca27-9cab-4ffb-9df9-6f91c3f95040"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 15:34:20 crc kubenswrapper[4820]: I0201 15:34:20.389449 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb1ca27-9cab-4ffb-9df9-6f91c3f95040-utilities\") on node \"crc\" DevicePath \"\""
Feb 01 15:34:20 crc kubenswrapper[4820]: I0201 15:34:20.393110 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb1ca27-9cab-4ffb-9df9-6f91c3f95040-kube-api-access-rqvkt" (OuterVolumeSpecName: "kube-api-access-rqvkt") pod "4bb1ca27-9cab-4ffb-9df9-6f91c3f95040" (UID: "4bb1ca27-9cab-4ffb-9df9-6f91c3f95040"). InnerVolumeSpecName "kube-api-access-rqvkt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 15:34:20 crc kubenswrapper[4820]: I0201 15:34:20.461235 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bb1ca27-9cab-4ffb-9df9-6f91c3f95040-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4bb1ca27-9cab-4ffb-9df9-6f91c3f95040" (UID: "4bb1ca27-9cab-4ffb-9df9-6f91c3f95040"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 15:34:20 crc kubenswrapper[4820]: I0201 15:34:20.491677 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb1ca27-9cab-4ffb-9df9-6f91c3f95040-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 01 15:34:20 crc kubenswrapper[4820]: I0201 15:34:20.491714 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqvkt\" (UniqueName: \"kubernetes.io/projected/4bb1ca27-9cab-4ffb-9df9-6f91c3f95040-kube-api-access-rqvkt\") on node \"crc\" DevicePath \"\""
Feb 01 15:34:20 crc kubenswrapper[4820]: I0201 15:34:20.830372 4820 generic.go:334] "Generic (PLEG): container finished" podID="4bb1ca27-9cab-4ffb-9df9-6f91c3f95040" containerID="f5c781e749a69e6ae16615d85cd28d3cbe6e24a7e3d0cb8bbce337b8695e6f7e" exitCode=0
Feb 01 15:34:20 crc kubenswrapper[4820]: I0201 15:34:20.830423 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wv8p5" event={"ID":"4bb1ca27-9cab-4ffb-9df9-6f91c3f95040","Type":"ContainerDied","Data":"f5c781e749a69e6ae16615d85cd28d3cbe6e24a7e3d0cb8bbce337b8695e6f7e"}
Feb 01 15:34:20 crc kubenswrapper[4820]: I0201 15:34:20.830491 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wv8p5"
Feb 01 15:34:20 crc kubenswrapper[4820]: I0201 15:34:20.830780 4820 scope.go:117] "RemoveContainer" containerID="f5c781e749a69e6ae16615d85cd28d3cbe6e24a7e3d0cb8bbce337b8695e6f7e"
Feb 01 15:34:20 crc kubenswrapper[4820]: I0201 15:34:20.830762 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wv8p5" event={"ID":"4bb1ca27-9cab-4ffb-9df9-6f91c3f95040","Type":"ContainerDied","Data":"7cf6fef465854fe8b8f32f1d53680ae145ad19dd0b938b8781361a3084511ae7"}
Feb 01 15:34:20 crc kubenswrapper[4820]: I0201 15:34:20.855736 4820 scope.go:117] "RemoveContainer" containerID="2a8983360d56c5a1a28030b0114ebf96dcef63773d480a6124f59b4e392ff47e"
Feb 01 15:34:20 crc kubenswrapper[4820]: I0201 15:34:20.880951 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wv8p5"]
Feb 01 15:34:20 crc kubenswrapper[4820]: I0201 15:34:20.887961 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wv8p5"]
Feb 01 15:34:21 crc kubenswrapper[4820]: I0201 15:34:21.053715 4820 scope.go:117] "RemoveContainer" containerID="eaa09fcacf0f84ee78381a26626d710a4f37b62a45848b2f08bd0851262f5cab"
Feb 01 15:34:21 crc kubenswrapper[4820]: I0201 15:34:21.091704 4820 scope.go:117] "RemoveContainer" containerID="f5c781e749a69e6ae16615d85cd28d3cbe6e24a7e3d0cb8bbce337b8695e6f7e"
Feb 01 15:34:21 crc kubenswrapper[4820]: E0201 15:34:21.092085 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5c781e749a69e6ae16615d85cd28d3cbe6e24a7e3d0cb8bbce337b8695e6f7e\": container with ID starting with f5c781e749a69e6ae16615d85cd28d3cbe6e24a7e3d0cb8bbce337b8695e6f7e not found: ID does not exist" containerID="f5c781e749a69e6ae16615d85cd28d3cbe6e24a7e3d0cb8bbce337b8695e6f7e"
Feb 01 15:34:21 crc kubenswrapper[4820]: I0201 15:34:21.092139 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5c781e749a69e6ae16615d85cd28d3cbe6e24a7e3d0cb8bbce337b8695e6f7e"} err="failed to get container status \"f5c781e749a69e6ae16615d85cd28d3cbe6e24a7e3d0cb8bbce337b8695e6f7e\": rpc error: code = NotFound desc = could not find container \"f5c781e749a69e6ae16615d85cd28d3cbe6e24a7e3d0cb8bbce337b8695e6f7e\": container with ID starting with f5c781e749a69e6ae16615d85cd28d3cbe6e24a7e3d0cb8bbce337b8695e6f7e not found: ID does not exist"
Feb 01 15:34:21 crc kubenswrapper[4820]: I0201 15:34:21.092172 4820 scope.go:117] "RemoveContainer" containerID="2a8983360d56c5a1a28030b0114ebf96dcef63773d480a6124f59b4e392ff47e"
Feb 01 15:34:21 crc kubenswrapper[4820]: E0201 15:34:21.092557 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a8983360d56c5a1a28030b0114ebf96dcef63773d480a6124f59b4e392ff47e\": container with ID starting with 2a8983360d56c5a1a28030b0114ebf96dcef63773d480a6124f59b4e392ff47e not found: ID does not exist" containerID="2a8983360d56c5a1a28030b0114ebf96dcef63773d480a6124f59b4e392ff47e"
Feb 01 15:34:21 crc kubenswrapper[4820]: I0201 15:34:21.092586 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a8983360d56c5a1a28030b0114ebf96dcef63773d480a6124f59b4e392ff47e"} err="failed to get container status \"2a8983360d56c5a1a28030b0114ebf96dcef63773d480a6124f59b4e392ff47e\": rpc error: code = NotFound desc = could not find container \"2a8983360d56c5a1a28030b0114ebf96dcef63773d480a6124f59b4e392ff47e\": container with ID starting with 2a8983360d56c5a1a28030b0114ebf96dcef63773d480a6124f59b4e392ff47e not found: ID does not exist"
Feb 01 15:34:21 crc kubenswrapper[4820]: I0201 15:34:21.092605 4820 scope.go:117] "RemoveContainer" containerID="eaa09fcacf0f84ee78381a26626d710a4f37b62a45848b2f08bd0851262f5cab"
Feb 01 15:34:21 crc kubenswrapper[4820]: E0201 15:34:21.092861 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaa09fcacf0f84ee78381a26626d710a4f37b62a45848b2f08bd0851262f5cab\": container with ID starting with eaa09fcacf0f84ee78381a26626d710a4f37b62a45848b2f08bd0851262f5cab not found: ID does not exist" containerID="eaa09fcacf0f84ee78381a26626d710a4f37b62a45848b2f08bd0851262f5cab"
Feb 01 15:34:21 crc kubenswrapper[4820]: I0201 15:34:21.092958 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaa09fcacf0f84ee78381a26626d710a4f37b62a45848b2f08bd0851262f5cab"} err="failed to get container status \"eaa09fcacf0f84ee78381a26626d710a4f37b62a45848b2f08bd0851262f5cab\": rpc error: code = NotFound desc = could not find container \"eaa09fcacf0f84ee78381a26626d710a4f37b62a45848b2f08bd0851262f5cab\": container with ID starting with eaa09fcacf0f84ee78381a26626d710a4f37b62a45848b2f08bd0851262f5cab not found: ID does not exist"
Feb 01 15:34:21 crc kubenswrapper[4820]: I0201 15:34:21.209397 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb1ca27-9cab-4ffb-9df9-6f91c3f95040" path="/var/lib/kubelet/pods/4bb1ca27-9cab-4ffb-9df9-6f91c3f95040/volumes"
Feb 01 15:34:23 crc kubenswrapper[4820]: I0201 15:34:23.304407 4820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qc42r"]
Feb 01 15:34:23 crc kubenswrapper[4820]: E0201 15:34:23.305264 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb1ca27-9cab-4ffb-9df9-6f91c3f95040" containerName="extract-content"
Feb 01 15:34:23 crc kubenswrapper[4820]: I0201 15:34:23.305282 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb1ca27-9cab-4ffb-9df9-6f91c3f95040" containerName="extract-content"
Feb 01 15:34:23 crc kubenswrapper[4820]: E0201 15:34:23.305310 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb1ca27-9cab-4ffb-9df9-6f91c3f95040" containerName="extract-utilities"
Feb 01 15:34:23 crc kubenswrapper[4820]: I0201 15:34:23.305317 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb1ca27-9cab-4ffb-9df9-6f91c3f95040" containerName="extract-utilities"
Feb 01 15:34:23 crc kubenswrapper[4820]: E0201 15:34:23.305340 4820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb1ca27-9cab-4ffb-9df9-6f91c3f95040" containerName="registry-server"
Feb 01 15:34:23 crc kubenswrapper[4820]: I0201 15:34:23.305348 4820 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb1ca27-9cab-4ffb-9df9-6f91c3f95040" containerName="registry-server"
Feb 01 15:34:23 crc kubenswrapper[4820]: I0201 15:34:23.305604 4820 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bb1ca27-9cab-4ffb-9df9-6f91c3f95040" containerName="registry-server"
Feb 01 15:34:23 crc kubenswrapper[4820]: I0201 15:34:23.307348 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qc42r"
Feb 01 15:34:23 crc kubenswrapper[4820]: I0201 15:34:23.315431 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qc42r"]
Feb 01 15:34:23 crc kubenswrapper[4820]: I0201 15:34:23.352082 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txjz4\" (UniqueName: \"kubernetes.io/projected/f443f007-bd6f-4a9b-bae8-3bf4b61bbe17-kube-api-access-txjz4\") pod \"certified-operators-qc42r\" (UID: \"f443f007-bd6f-4a9b-bae8-3bf4b61bbe17\") " pod="openshift-marketplace/certified-operators-qc42r"
Feb 01 15:34:23 crc kubenswrapper[4820]: I0201 15:34:23.352394 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f443f007-bd6f-4a9b-bae8-3bf4b61bbe17-catalog-content\") pod \"certified-operators-qc42r\" (UID: \"f443f007-bd6f-4a9b-bae8-3bf4b61bbe17\") " pod="openshift-marketplace/certified-operators-qc42r"
Feb 01 15:34:23 crc kubenswrapper[4820]: I0201 15:34:23.352802 4820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f443f007-bd6f-4a9b-bae8-3bf4b61bbe17-utilities\") pod \"certified-operators-qc42r\" (UID: \"f443f007-bd6f-4a9b-bae8-3bf4b61bbe17\") " pod="openshift-marketplace/certified-operators-qc42r"
Feb 01 15:34:23 crc kubenswrapper[4820]: I0201 15:34:23.455160 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f443f007-bd6f-4a9b-bae8-3bf4b61bbe17-utilities\") pod \"certified-operators-qc42r\" (UID: \"f443f007-bd6f-4a9b-bae8-3bf4b61bbe17\") " pod="openshift-marketplace/certified-operators-qc42r"
Feb 01 15:34:23 crc kubenswrapper[4820]: I0201 15:34:23.455308 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txjz4\" (UniqueName: \"kubernetes.io/projected/f443f007-bd6f-4a9b-bae8-3bf4b61bbe17-kube-api-access-txjz4\") pod \"certified-operators-qc42r\" (UID: \"f443f007-bd6f-4a9b-bae8-3bf4b61bbe17\") " pod="openshift-marketplace/certified-operators-qc42r"
Feb 01 15:34:23 crc kubenswrapper[4820]: I0201 15:34:23.455336 4820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f443f007-bd6f-4a9b-bae8-3bf4b61bbe17-catalog-content\") pod \"certified-operators-qc42r\" (UID: \"f443f007-bd6f-4a9b-bae8-3bf4b61bbe17\") " pod="openshift-marketplace/certified-operators-qc42r"
Feb 01 15:34:23 crc kubenswrapper[4820]: I0201 15:34:23.455681 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f443f007-bd6f-4a9b-bae8-3bf4b61bbe17-utilities\") pod \"certified-operators-qc42r\" (UID: \"f443f007-bd6f-4a9b-bae8-3bf4b61bbe17\") " pod="openshift-marketplace/certified-operators-qc42r"
Feb 01 15:34:23 crc kubenswrapper[4820]: I0201 15:34:23.455800 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f443f007-bd6f-4a9b-bae8-3bf4b61bbe17-catalog-content\") pod \"certified-operators-qc42r\" (UID: \"f443f007-bd6f-4a9b-bae8-3bf4b61bbe17\") " pod="openshift-marketplace/certified-operators-qc42r"
Feb 01 15:34:23 crc kubenswrapper[4820]: I0201 15:34:23.478313 4820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txjz4\" (UniqueName: \"kubernetes.io/projected/f443f007-bd6f-4a9b-bae8-3bf4b61bbe17-kube-api-access-txjz4\") pod \"certified-operators-qc42r\" (UID: \"f443f007-bd6f-4a9b-bae8-3bf4b61bbe17\") " pod="openshift-marketplace/certified-operators-qc42r"
Feb 01 15:34:23 crc kubenswrapper[4820]: I0201 15:34:23.624463 4820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qc42r"
Feb 01 15:34:24 crc kubenswrapper[4820]: W0201 15:34:24.139008 4820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf443f007_bd6f_4a9b_bae8_3bf4b61bbe17.slice/crio-b92314fc7ac9ca15a6614bfd37c394a6eb0e4415845b670cae80740cf6717f5e WatchSource:0}: Error finding container b92314fc7ac9ca15a6614bfd37c394a6eb0e4415845b670cae80740cf6717f5e: Status 404 returned error can't find the container with id b92314fc7ac9ca15a6614bfd37c394a6eb0e4415845b670cae80740cf6717f5e
Feb 01 15:34:24 crc kubenswrapper[4820]: I0201 15:34:24.140676 4820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qc42r"]
Feb 01 15:34:24 crc kubenswrapper[4820]: I0201 15:34:24.894839 4820 generic.go:334] "Generic (PLEG): container finished" podID="f443f007-bd6f-4a9b-bae8-3bf4b61bbe17" containerID="de952be8c9ad7b25b9e67aa492f15be83cc7e18b33fa6ddcfff5d92cb6cf17d3" exitCode=0
Feb 01 15:34:24 crc kubenswrapper[4820]: I0201 15:34:24.895025 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qc42r" event={"ID":"f443f007-bd6f-4a9b-bae8-3bf4b61bbe17","Type":"ContainerDied","Data":"de952be8c9ad7b25b9e67aa492f15be83cc7e18b33fa6ddcfff5d92cb6cf17d3"}
Feb 01 15:34:24 crc kubenswrapper[4820]: I0201 15:34:24.895530 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qc42r" event={"ID":"f443f007-bd6f-4a9b-bae8-3bf4b61bbe17","Type":"ContainerStarted","Data":"b92314fc7ac9ca15a6614bfd37c394a6eb0e4415845b670cae80740cf6717f5e"}
Feb 01 15:34:26 crc kubenswrapper[4820]: I0201 15:34:26.917061 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qc42r" event={"ID":"f443f007-bd6f-4a9b-bae8-3bf4b61bbe17","Type":"ContainerStarted","Data":"768417de981b8e281d637b450f99c228cbc775998905150ce4f754ad8c4b5b25"}
Feb 01 15:34:27 crc kubenswrapper[4820]: I0201 15:34:27.930349 4820 generic.go:334] "Generic (PLEG): container finished" podID="f443f007-bd6f-4a9b-bae8-3bf4b61bbe17" containerID="768417de981b8e281d637b450f99c228cbc775998905150ce4f754ad8c4b5b25" exitCode=0
Feb 01 15:34:27 crc kubenswrapper[4820]: I0201 15:34:27.930412 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qc42r" event={"ID":"f443f007-bd6f-4a9b-bae8-3bf4b61bbe17","Type":"ContainerDied","Data":"768417de981b8e281d637b450f99c228cbc775998905150ce4f754ad8c4b5b25"}
Feb 01 15:34:28 crc kubenswrapper[4820]: I0201 15:34:28.946152 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qc42r" event={"ID":"f443f007-bd6f-4a9b-bae8-3bf4b61bbe17","Type":"ContainerStarted","Data":"80d56ef5a935646054c55c4dd7987afd3650b884f3a170bc0fdcc319fb85b4b7"}
Feb 01 15:34:28 crc kubenswrapper[4820]: I0201 15:34:28.977346 4820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qc42r" podStartSLOduration=2.501048697 podStartE2EDuration="5.97732595s" podCreationTimestamp="2026-02-01 15:34:23 +0000 UTC" firstStartedPulling="2026-02-01 15:34:24.90151028 +0000 UTC m=+4406.421876574" lastFinishedPulling="2026-02-01 15:34:28.377787543 +0000 UTC m=+4409.898153827" observedRunningTime="2026-02-01 15:34:28.971140794 +0000 UTC m=+4410.491507118" watchObservedRunningTime="2026-02-01 15:34:28.97732595 +0000 UTC m=+4410.497692254"
Feb 01 15:34:29 crc kubenswrapper[4820]: I0201 15:34:29.210752 4820 scope.go:117] "RemoveContainer" containerID="63cafd697363e7174d48445ee004a950f65cf6adb914081c94aa8df849d55389"
Feb 01 15:34:29 crc kubenswrapper[4820]: E0201 15:34:29.211304 4820 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527"
Feb 01 15:34:33 crc kubenswrapper[4820]: I0201 15:34:33.625403 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qc42r"
Feb 01 15:34:33 crc kubenswrapper[4820]: I0201 15:34:33.625687 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qc42r"
Feb 01 15:34:33 crc kubenswrapper[4820]: I0201 15:34:33.667433 4820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qc42r"
Feb 01 15:34:34 crc kubenswrapper[4820]: I0201 15:34:34.080816 4820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qc42r"
Feb 01 15:34:34 crc kubenswrapper[4820]: I0201 15:34:34.157042 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qc42r"]
Feb 01 15:34:36 crc kubenswrapper[4820]: I0201 15:34:36.019015 4820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qc42r" podUID="f443f007-bd6f-4a9b-bae8-3bf4b61bbe17" containerName="registry-server" containerID="cri-o://80d56ef5a935646054c55c4dd7987afd3650b884f3a170bc0fdcc319fb85b4b7" gracePeriod=2
Feb 01 15:34:36 crc kubenswrapper[4820]: I0201 15:34:36.560888 4820 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-qc42r" Feb 01 15:34:36 crc kubenswrapper[4820]: I0201 15:34:36.639631 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txjz4\" (UniqueName: \"kubernetes.io/projected/f443f007-bd6f-4a9b-bae8-3bf4b61bbe17-kube-api-access-txjz4\") pod \"f443f007-bd6f-4a9b-bae8-3bf4b61bbe17\" (UID: \"f443f007-bd6f-4a9b-bae8-3bf4b61bbe17\") " Feb 01 15:34:36 crc kubenswrapper[4820]: I0201 15:34:36.639847 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f443f007-bd6f-4a9b-bae8-3bf4b61bbe17-catalog-content\") pod \"f443f007-bd6f-4a9b-bae8-3bf4b61bbe17\" (UID: \"f443f007-bd6f-4a9b-bae8-3bf4b61bbe17\") " Feb 01 15:34:36 crc kubenswrapper[4820]: I0201 15:34:36.639944 4820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f443f007-bd6f-4a9b-bae8-3bf4b61bbe17-utilities\") pod \"f443f007-bd6f-4a9b-bae8-3bf4b61bbe17\" (UID: \"f443f007-bd6f-4a9b-bae8-3bf4b61bbe17\") " Feb 01 15:34:36 crc kubenswrapper[4820]: I0201 15:34:36.641416 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f443f007-bd6f-4a9b-bae8-3bf4b61bbe17-utilities" (OuterVolumeSpecName: "utilities") pod "f443f007-bd6f-4a9b-bae8-3bf4b61bbe17" (UID: "f443f007-bd6f-4a9b-bae8-3bf4b61bbe17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:34:36 crc kubenswrapper[4820]: I0201 15:34:36.645169 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f443f007-bd6f-4a9b-bae8-3bf4b61bbe17-kube-api-access-txjz4" (OuterVolumeSpecName: "kube-api-access-txjz4") pod "f443f007-bd6f-4a9b-bae8-3bf4b61bbe17" (UID: "f443f007-bd6f-4a9b-bae8-3bf4b61bbe17"). InnerVolumeSpecName "kube-api-access-txjz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 15:34:36 crc kubenswrapper[4820]: I0201 15:34:36.714148 4820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f443f007-bd6f-4a9b-bae8-3bf4b61bbe17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f443f007-bd6f-4a9b-bae8-3bf4b61bbe17" (UID: "f443f007-bd6f-4a9b-bae8-3bf4b61bbe17"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 15:34:36 crc kubenswrapper[4820]: I0201 15:34:36.743124 4820 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f443f007-bd6f-4a9b-bae8-3bf4b61bbe17-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 15:34:36 crc kubenswrapper[4820]: I0201 15:34:36.743219 4820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txjz4\" (UniqueName: \"kubernetes.io/projected/f443f007-bd6f-4a9b-bae8-3bf4b61bbe17-kube-api-access-txjz4\") on node \"crc\" DevicePath \"\"" Feb 01 15:34:36 crc kubenswrapper[4820]: I0201 15:34:36.743245 4820 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f443f007-bd6f-4a9b-bae8-3bf4b61bbe17-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 15:34:37 crc kubenswrapper[4820]: I0201 15:34:37.034505 4820 generic.go:334] "Generic (PLEG): container finished" podID="f443f007-bd6f-4a9b-bae8-3bf4b61bbe17" containerID="80d56ef5a935646054c55c4dd7987afd3650b884f3a170bc0fdcc319fb85b4b7" exitCode=0 Feb 01 15:34:37 crc kubenswrapper[4820]: I0201 15:34:37.034577 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qc42r" event={"ID":"f443f007-bd6f-4a9b-bae8-3bf4b61bbe17","Type":"ContainerDied","Data":"80d56ef5a935646054c55c4dd7987afd3650b884f3a170bc0fdcc319fb85b4b7"} Feb 01 15:34:37 crc kubenswrapper[4820]: I0201 15:34:37.034603 4820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qc42r" Feb 01 15:34:37 crc kubenswrapper[4820]: I0201 15:34:37.034640 4820 scope.go:117] "RemoveContainer" containerID="80d56ef5a935646054c55c4dd7987afd3650b884f3a170bc0fdcc319fb85b4b7" Feb 01 15:34:37 crc kubenswrapper[4820]: I0201 15:34:37.034619 4820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qc42r" event={"ID":"f443f007-bd6f-4a9b-bae8-3bf4b61bbe17","Type":"ContainerDied","Data":"b92314fc7ac9ca15a6614bfd37c394a6eb0e4415845b670cae80740cf6717f5e"} Feb 01 15:34:37 crc kubenswrapper[4820]: I0201 15:34:37.066066 4820 scope.go:117] "RemoveContainer" containerID="768417de981b8e281d637b450f99c228cbc775998905150ce4f754ad8c4b5b25" Feb 01 15:34:37 crc kubenswrapper[4820]: I0201 15:34:37.107573 4820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qc42r"] Feb 01 15:34:37 crc kubenswrapper[4820]: I0201 15:34:37.143071 4820 scope.go:117] "RemoveContainer" containerID="de952be8c9ad7b25b9e67aa492f15be83cc7e18b33fa6ddcfff5d92cb6cf17d3" Feb 01 15:34:37 crc kubenswrapper[4820]: I0201 15:34:37.151272 4820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qc42r"] Feb 01 15:34:37 crc kubenswrapper[4820]: I0201 15:34:37.175651 4820 scope.go:117] "RemoveContainer" containerID="80d56ef5a935646054c55c4dd7987afd3650b884f3a170bc0fdcc319fb85b4b7" Feb 01 15:34:37 crc kubenswrapper[4820]: E0201 15:34:37.176040 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80d56ef5a935646054c55c4dd7987afd3650b884f3a170bc0fdcc319fb85b4b7\": container with ID starting with 80d56ef5a935646054c55c4dd7987afd3650b884f3a170bc0fdcc319fb85b4b7 not found: ID does not exist" containerID="80d56ef5a935646054c55c4dd7987afd3650b884f3a170bc0fdcc319fb85b4b7" Feb 01 15:34:37 crc kubenswrapper[4820]: I0201 15:34:37.176081 
4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80d56ef5a935646054c55c4dd7987afd3650b884f3a170bc0fdcc319fb85b4b7"} err="failed to get container status \"80d56ef5a935646054c55c4dd7987afd3650b884f3a170bc0fdcc319fb85b4b7\": rpc error: code = NotFound desc = could not find container \"80d56ef5a935646054c55c4dd7987afd3650b884f3a170bc0fdcc319fb85b4b7\": container with ID starting with 80d56ef5a935646054c55c4dd7987afd3650b884f3a170bc0fdcc319fb85b4b7 not found: ID does not exist" Feb 01 15:34:37 crc kubenswrapper[4820]: I0201 15:34:37.176106 4820 scope.go:117] "RemoveContainer" containerID="768417de981b8e281d637b450f99c228cbc775998905150ce4f754ad8c4b5b25" Feb 01 15:34:37 crc kubenswrapper[4820]: E0201 15:34:37.176363 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"768417de981b8e281d637b450f99c228cbc775998905150ce4f754ad8c4b5b25\": container with ID starting with 768417de981b8e281d637b450f99c228cbc775998905150ce4f754ad8c4b5b25 not found: ID does not exist" containerID="768417de981b8e281d637b450f99c228cbc775998905150ce4f754ad8c4b5b25" Feb 01 15:34:37 crc kubenswrapper[4820]: I0201 15:34:37.176396 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"768417de981b8e281d637b450f99c228cbc775998905150ce4f754ad8c4b5b25"} err="failed to get container status \"768417de981b8e281d637b450f99c228cbc775998905150ce4f754ad8c4b5b25\": rpc error: code = NotFound desc = could not find container \"768417de981b8e281d637b450f99c228cbc775998905150ce4f754ad8c4b5b25\": container with ID starting with 768417de981b8e281d637b450f99c228cbc775998905150ce4f754ad8c4b5b25 not found: ID does not exist" Feb 01 15:34:37 crc kubenswrapper[4820]: I0201 15:34:37.176415 4820 scope.go:117] "RemoveContainer" containerID="de952be8c9ad7b25b9e67aa492f15be83cc7e18b33fa6ddcfff5d92cb6cf17d3" Feb 01 15:34:37 crc kubenswrapper[4820]: E0201 15:34:37.176603 4820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de952be8c9ad7b25b9e67aa492f15be83cc7e18b33fa6ddcfff5d92cb6cf17d3\": container with ID starting with de952be8c9ad7b25b9e67aa492f15be83cc7e18b33fa6ddcfff5d92cb6cf17d3 not found: ID does not exist" containerID="de952be8c9ad7b25b9e67aa492f15be83cc7e18b33fa6ddcfff5d92cb6cf17d3" Feb 01 15:34:37 crc kubenswrapper[4820]: I0201 15:34:37.176629 4820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de952be8c9ad7b25b9e67aa492f15be83cc7e18b33fa6ddcfff5d92cb6cf17d3"} err="failed to get container status \"de952be8c9ad7b25b9e67aa492f15be83cc7e18b33fa6ddcfff5d92cb6cf17d3\": rpc error: code = NotFound desc = could not find container \"de952be8c9ad7b25b9e67aa492f15be83cc7e18b33fa6ddcfff5d92cb6cf17d3\": container with ID starting with de952be8c9ad7b25b9e67aa492f15be83cc7e18b33fa6ddcfff5d92cb6cf17d3 not found: ID does not exist" Feb 01 15:34:37 crc kubenswrapper[4820]: I0201 15:34:37.209076 4820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f443f007-bd6f-4a9b-bae8-3bf4b61bbe17" path="/var/lib/kubelet/pods/f443f007-bd6f-4a9b-bae8-3bf4b61bbe17/volumes" Feb 01 15:34:43 crc kubenswrapper[4820]: I0201 15:34:43.198617 4820 scope.go:117] "RemoveContainer" containerID="63cafd697363e7174d48445ee004a950f65cf6adb914081c94aa8df849d55389" Feb 01 15:34:43 crc kubenswrapper[4820]: E0201 15:34:43.199987 4820 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w8vbg_openshift-machine-config-operator(060a9e0b-803f-4ccc-bed6-92614d449527)\"" pod="openshift-machine-config-operator/machine-config-daemon-w8vbg" podUID="060a9e0b-803f-4ccc-bed6-92614d449527"